I0501 15:11:56.905051 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0501 15:11:56.905235 7 e2e.go:124] Starting e2e run "54142f2e-34e6-44e6-afee-6db2eef92fa2" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588345915 - Will randomize all specs
Will run 275 of 4992 specs

May 1 15:11:56.957: INFO: >>> kubeConfig: /root/.kube/config
May 1 15:11:56.963: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 1 15:11:56.989: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 1 15:11:57.045: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 1 15:11:57.045: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 1 15:11:57.045: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 1 15:11:57.054: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 1 15:11:57.054: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 1 15:11:57.054: INFO: e2e test version: v1.18.2
May 1 15:11:57.055: INFO: kube-apiserver version: v1.18.2
May 1 15:11:57.055: INFO: >>> kubeConfig: /root/.kube/config
May 1 15:11:57.059: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:11:57.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
May 1 15:11:57.132: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May 1 15:11:57.140: INFO: Waiting up to 5m0s for pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965" in namespace "var-expansion-610" to be "Succeeded or Failed"
May 1 15:11:57.143: INFO: Pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093018ms
May 1 15:11:59.458: INFO: Pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318030298s
May 1 15:12:01.463: INFO: Pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322387487s
May 1 15:12:03.466: INFO: Pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.325987329s
STEP: Saw pod success
May 1 15:12:03.466: INFO: Pod "var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965" satisfied condition "Succeeded or Failed"
May 1 15:12:03.468: INFO: Trying to get logs from node kali-worker pod var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965 container dapi-container:
STEP: delete the pod
May 1 15:12:03.578: INFO: Waiting for pod var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965 to disappear
May 1 15:12:03.635: INFO: Pod var-expansion-0d3c1484-1885-4262-9b7b-26e988e5b965 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:12:03.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-610" for this suite.
• [SLOW TEST:6.590 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:12:03.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56
May 1 15:12:04.169: INFO: Pod name my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56: Found 0 pods out of 1
May 1 15:12:09.174: INFO: Pod name my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56: Found 1 pods out of 1
May 1 15:12:09.174: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56" are running
May 1 15:12:09.178: INFO: Pod "my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56-pqdw4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:12:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:12:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:12:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 15:12:04 +0000 UTC Reason: Message:}])
May 1 15:12:09.178: INFO: Trying to dial the pod
May 1 15:12:14.188: INFO: Controller my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56: Got expected result from replica 1 [my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56-pqdw4]: "my-hostname-basic-1dd7bc3a-1521-4079-80ff-b0fa302e6a56-pqdw4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:12:14.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7266" for this suite.
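The ReplicationController test above ("should serve a basic image on each replica") spins up an RC whose single replica serves its own hostname, then dials each replica and checks the response. A manifest in the same spirit might look like the following sketch (the name, image tag, and port are illustrative assumptions, not the exact conformance fixture):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic          # illustrative; the test appends a UUID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12  # assumed image; replies with the pod's hostname
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376       # assumed port
```

Because each pod's hostname is its pod name, the test can verify per-replica serving simply by comparing the HTTP response to the pod name, as the "Got expected result from replica 1" line shows.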
• [SLOW TEST:10.546 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":2,"skipped":22,"failed":0}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:12:14.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 1 15:12:15.331: INFO: Pod name wrapped-volume-race-10344bce-8cc9-432c-83cf-cf1c17dc9148: Found 0 pods out of 5
May 1 15:12:20.342: INFO: Pod name wrapped-volume-race-10344bce-8cc9-432c-83cf-cf1c17dc9148: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-10344bce-8cc9-432c-83cf-cf1c17dc9148 in namespace emptydir-wrapper-6363, will wait for the garbage collector to delete the pods
May 1 15:12:36.497: INFO: Deleting ReplicationController wrapped-volume-race-10344bce-8cc9-432c-83cf-cf1c17dc9148 took: 14.538269ms
May 1 15:12:36.897: INFO: Terminating ReplicationController wrapped-volume-race-10344bce-8cc9-432c-83cf-cf1c17dc9148 pods took: 400.434354ms
STEP: Creating RC which spawns configmap-volume pods
May 1 15:12:55.242: INFO: Pod name wrapped-volume-race-f6e60410-47a5-40fc-a8a2-472b966855a8: Found 0 pods out of 5
May 1 15:13:00.664: INFO: Pod name wrapped-volume-race-f6e60410-47a5-40fc-a8a2-472b966855a8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f6e60410-47a5-40fc-a8a2-472b966855a8 in namespace emptydir-wrapper-6363, will wait for the garbage collector to delete the pods
May 1 15:13:18.713: INFO: Deleting ReplicationController wrapped-volume-race-f6e60410-47a5-40fc-a8a2-472b966855a8 took: 1.102755089s
May 1 15:13:19.714: INFO: Terminating ReplicationController wrapped-volume-race-f6e60410-47a5-40fc-a8a2-472b966855a8 pods took: 1.000278167s
STEP: Creating RC which spawns configmap-volume pods
May 1 15:13:34.207: INFO: Pod name wrapped-volume-race-66bb6140-1240-4464-9018-46cb6fa3ca9d: Found 0 pods out of 5
May 1 15:13:39.420: INFO: Pod name wrapped-volume-race-66bb6140-1240-4464-9018-46cb6fa3ca9d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-66bb6140-1240-4464-9018-46cb6fa3ca9d in namespace emptydir-wrapper-6363, will wait for the garbage collector to delete the pods
May 1 15:13:55.519: INFO: Deleting ReplicationController wrapped-volume-race-66bb6140-1240-4464-9018-46cb6fa3ca9d took: 18.9484ms
May 1 15:13:57.119: INFO: Terminating ReplicationController wrapped-volume-race-66bb6140-1240-4464-9018-46cb6fa3ca9d pods took: 1.600251374s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:14:15.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6363" for this suite.
• [SLOW TEST:120.863 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":3,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:14:15.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:14:15.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1" in namespace "projected-1016" to be "Succeeded or Failed"
May 1 15:14:15.454: INFO: Pod "downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.259862ms
May 1 15:14:17.562: INFO: Pod "downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123182813s
May 1 15:14:19.626: INFO: Pod "downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187057402s
STEP: Saw pod success
May 1 15:14:19.626: INFO: Pod "downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1" satisfied condition "Succeeded or Failed"
May 1 15:14:19.629: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1 container client-container:
STEP: delete the pod
May 1 15:14:20.004: INFO: Waiting for pod downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1 to disappear
May 1 15:14:20.076: INFO: Pod downwardapi-volume-fa6e97b9-a7ec-43a6-a99e-5b5171c5fdd1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:14:20.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1016" for this suite.
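The projected downwardAPI test above ("should provide container's cpu limit") mounts the container's own CPU limit into a file and has the container cat it. A minimal pod in the same shape might look like this sketch (names, image, and the 500m limit are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed utility image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                    # illustrative limit the volume exposes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # report the limit in millicores
```

With `divisor: 1m`, the file would contain `500` for the limit above; the test then reads the container's logs to verify the value, which is why the log shows "Trying to get logs from node ... container client-container".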
• [SLOW TEST:5.167 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":50,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:14:20.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 1 15:14:20.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:14:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1488" for this suite.
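The CRD test above sets up a multi-version CustomResourceDefinition, renames one served version, and checks that the published OpenAPI spec follows: the new version name is served, the old name disappears, and the untouched version is unchanged. A hedged sketch of such a multi-version CRD (group, names, and version labels here are all illustrative, not the e2e fixture):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com        # illustrative; must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v2                        # e.g. renamed from v1; the apiserver re-publishes OpenAPI for it
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v3                        # the "other" version, which must stay unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

Renaming a served version is effectively removing one entry from `spec.versions` and adding another, which is why the test separately checks both "new name is served" and "old name is removed".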
• [SLOW TEST:18.187 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":5,"skipped":51,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:14:38.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6894921d-8ab9-48fa-bea3-4cc418ee1f66
STEP: Creating a pod to test consume secrets
May 1 15:14:38.541: INFO: Waiting up to 5m0s for pod "pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6" in namespace "secrets-3361" to be "Succeeded or Failed"
May 1 15:14:38.545: INFO: Pod "pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.913433ms
May 1 15:14:40.549: INFO: Pod "pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008097466s
May 1 15:14:42.553: INFO: Pod "pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012005794s
STEP: Saw pod success
May 1 15:14:42.553: INFO: Pod "pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6" satisfied condition "Succeeded or Failed"
May 1 15:14:42.556: INFO: Trying to get logs from node kali-worker pod pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6 container secret-volume-test:
STEP: delete the pod
May 1 15:14:42.610: INFO: Waiting for pod pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6 to disappear
May 1 15:14:42.763: INFO: Pod pod-secrets-c3fdb34b-bff8-43b9-bc07-a19ff4d790f6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:14:42.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3361" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":62,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:14:42.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:14:43.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef" in namespace "projected-3135" to be "Succeeded or Failed"
May 1 15:14:43.222: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef": Phase="Pending", Reason="", readiness=false. Elapsed: 27.502926ms
May 1 15:14:45.226: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031296933s
May 1 15:14:47.308: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113125908s
May 1 15:14:49.533: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef": Phase="Running", Reason="", readiness=true. Elapsed: 6.338363996s
May 1 15:14:51.538: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.342651757s
STEP: Saw pod success
May 1 15:14:51.538: INFO: Pod "downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef" satisfied condition "Succeeded or Failed"
May 1 15:14:51.540: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef container client-container:
STEP: delete the pod
May 1 15:14:51.674: INFO: Waiting for pod downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef to disappear
May 1 15:14:51.742: INFO: Pod downwardapi-volume-376c5252-1f20-4d34-8bb0-0305bc7547ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:14:51.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3135" for this suite.
• [SLOW TEST:9.144 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":69,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:14:51.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:14:52.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 1 15:14:56.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-552 create -f -'
May 1 15:15:06.253: INFO: stderr: ""
May 1 15:15:06.253: INFO: stdout: "e2e-test-crd-publish-openapi-6019-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 1 15:15:06.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-552 delete e2e-test-crd-publish-openapi-6019-crds test-cr'
May 1 15:15:07.739: INFO: stderr: ""
May 1 15:15:07.739: INFO: stdout: "e2e-test-crd-publish-openapi-6019-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 1 15:15:07.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-552 apply -f -'
May 1 15:15:08.461: INFO: stderr: ""
May 1 15:15:08.461: INFO: stdout: "e2e-test-crd-publish-openapi-6019-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 1 15:15:08.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-552 delete e2e-test-crd-publish-openapi-6019-crds test-cr'
May 1 15:15:08.694: INFO: stderr: ""
May 1 15:15:08.694: INFO: stdout: "e2e-test-crd-publish-openapi-6019-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 1 15:15:08.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6019-crds'
May 1 15:15:09.580: INFO: stderr: ""
May 1 15:15:09.580: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6019-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:15:13.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-552" for this suite.
• [SLOW TEST:21.274 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":8,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:15:13.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:15:26.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1494" for this suite.
• [SLOW TEST:13.445 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":9,"skipped":103,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:15:26.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:15:27.199: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26" in namespace "downward-api-7142" to be "Succeeded or Failed"
May 1 15:15:27.255: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 55.845057ms
May 1 15:15:29.615: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415853473s
May 1 15:15:31.950: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.750688621s
May 1 15:15:34.355: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26": Phase="Running", Reason="", readiness=true. Elapsed: 7.156055931s
May 1 15:15:36.360: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.160437608s
STEP: Saw pod success
May 1 15:15:36.360: INFO: Pod "downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26" satisfied condition "Succeeded or Failed"
May 1 15:15:36.363: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26 container client-container:
STEP: delete the pod
May 1 15:15:36.420: INFO: Waiting for pod downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26 to disappear
May 1 15:15:36.438: INFO: Pod downwardapi-volume-65181be7-d3fe-4f4b-8234-45b279e6fc26 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:15:36.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7142" for this suite.
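The Downward API volume test above ("should provide container's cpu request") is the non-projected counterpart of the earlier projected test: the request, rather than the limit, is written to a file via a plain `downwardAPI` volume. A minimal sketch (names, image, and the 250m request are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                   # assumed utility image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                         # illustrative request the volume exposes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                          # plain downwardAPI volume, no projection
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
```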
• [SLOW TEST:9.891 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":126,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:15:36.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:15:36.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250" in namespace "downward-api-1619" to be "Succeeded or Failed"
May 1 15:15:36.765: INFO: Pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250": Phase="Pending", Reason="", readiness=false. Elapsed: 46.743284ms
May 1 15:15:38.841: INFO: Pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123227793s
May 1 15:15:41.063: INFO: Pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345058484s
May 1 15:15:43.066: INFO: Pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.348039063s
STEP: Saw pod success
May 1 15:15:43.066: INFO: Pod "downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250" satisfied condition "Succeeded or Failed"
May 1 15:15:43.068: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250 container client-container:
STEP: delete the pod
May 1 15:15:43.115: INFO: Waiting for pod downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250 to disappear
May 1 15:15:43.248: INFO: Pod downwardapi-volume-b3124ada-4ec8-4b24-b1eb-39217623f250 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:15:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1619" for this suite.
• [SLOW TEST:6.730 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":145,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:15:43.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:15:57.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-548" for this suite. • [SLOW TEST:14.647 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":12,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:15:57.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6480 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6480;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6480 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6480;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6480.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6480.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6480.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6480.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6480.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6480.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6480.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.159.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.159.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.159.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.159.163_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6480 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6480;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6480 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6480;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6480.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6480.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6480.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6480.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6480.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6480.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6480.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6480.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.159.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.159.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.159.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.159.163_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 15:16:08.280: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.283: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.286: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.292: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods 
dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.294: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.297: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.300: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.322: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.324: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.327: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.333: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested 
resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.338: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.340: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:08.357: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] May 1 15:16:13.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the 
requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.381: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.483: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.486: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could 
not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.488: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.492: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.495: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.500: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.503: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:13.525: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] May 1 15:16:18.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.373: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod 
dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.392: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.411: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.414: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.416: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.421: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.424: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.426: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.429: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:18.446: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] May 1 15:16:23.363: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.369: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.373: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.376: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.378: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.403: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.406: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.408: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.411: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.414: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:23.439: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] 
May 1 15:16:28.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.378: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods 
dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.402: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.410: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.459: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.462: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.464: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.469: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.472: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested 
resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:28.517: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] May 1 15:16:33.363: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.367: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.374: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods 
dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.386: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.437: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.439: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.443: INFO: Unable to read jessie_udp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.445: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480 from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.448: INFO: Unable to read jessie_udp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested 
resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.452: INFO: Unable to read jessie_tcp@dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.457: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.461: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc from pod dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b: the server could not find the requested resource (get pods dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b) May 1 15:16:33.475: INFO: Lookups using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6480 wheezy_tcp@dns-test-service.dns-6480 wheezy_udp@dns-test-service.dns-6480.svc wheezy_tcp@dns-test-service.dns-6480.svc wheezy_udp@_http._tcp.dns-test-service.dns-6480.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6480.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6480 jessie_tcp@dns-test-service.dns-6480 jessie_udp@dns-test-service.dns-6480.svc jessie_tcp@dns-test-service.dns-6480.svc jessie_udp@_http._tcp.dns-test-service.dns-6480.svc jessie_tcp@_http._tcp.dns-test-service.dns-6480.svc] May 1 15:16:38.473: INFO: DNS probes using dns-6480/dns-test-8539e9be-d0a7-4b2d-922e-a4a38ed1533b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:16:39.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
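The probe names in the failure lists above follow a fixed pattern: for each prober image (`wheezy`, `jessie`) the service name is tried over UDP and TCP at increasing levels of qualification, plus an SRV-style lookup for the named `http` port. A minimal sketch of how that list can be generated (this is an illustration of the naming scheme, not the actual e2e helper):

```python
def dns_probe_names(service, namespace):
    """Build the partial-qualification probe list seen in the log above.

    Order matches the failure lists: per image, per suffix, UDP then TCP.
    """
    suffixes = [
        service,                                    # bare service name
        f"{service}.{namespace}",                   # service.namespace
        f"{service}.{namespace}.svc",               # service.namespace.svc
        f"_http._tcp.{service}.{namespace}.svc",    # SRV record for the named port
    ]
    return [
        f"{image}_{proto}@{suffix}"
        for image in ("wheezy", "jessie")
        for suffix in suffixes
        for proto in ("udp", "tcp")
    ]
```

For `dns_probe_names("dns-test-service", "dns-6480")` this yields the same 16 names, in the same order, as the `Lookups ... failed for:` lines above.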
STEP: Destroying namespace "dns-6480" for this suite. • [SLOW TEST:41.165 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":13,"skipped":174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:16:39.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-25fb0569-6516-45a1-99c5-4d931763fea3 STEP: Creating a pod to test consume secrets May 1 15:16:39.184: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e" in namespace "projected-4156" to be "Succeeded or Failed" May 1 15:16:39.199: INFO: Pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.87589ms May 1 15:16:41.218: INFO: Pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033808541s May 1 15:16:43.222: INFO: Pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e": Phase="Running", Reason="", readiness=true. Elapsed: 4.037058202s May 1 15:16:45.236: INFO: Pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051845138s STEP: Saw pod success May 1 15:16:45.236: INFO: Pod "pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e" satisfied condition "Succeeded or Failed" May 1 15:16:45.238: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e container projected-secret-volume-test: STEP: delete the pod May 1 15:16:45.285: INFO: Waiting for pod pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e to disappear May 1 15:16:45.308: INFO: Pod pod-projected-secrets-ae3fa033-7487-4204-a042-9bde06d4cc8e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:16:45.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4156" for this suite. 
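The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines above come from a simple poll loop. A minimal sketch of that pattern, with the clock and phase lookup injected so it needs no live cluster (the real framework helper differs in detail):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout expires.

    Mirrors the 'Waiting up to 5m0s for pod ...' log lines above: each
    iteration logs the current phase and elapsed time, then sleeps.
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

Injecting `now` and `sleep` is what makes the loop unit-testable with a fake clock rather than a 5-minute wall-clock wait.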
• [SLOW TEST:6.247 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":198,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:16:45.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0501 15:16:46.498192 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 1 15:16:46.498: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:16:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7951" for this suite.
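The cascade this test exercises (delete the Deployment, then wait for its ReplicaSet and Pods to be collected) is driven by owner references: once every owner of an object is gone, the object itself becomes garbage, which can in turn orphan its own dependents. A toy model of that fixpoint (not the real garbage collector, which works incrementally from watch events):

```python
def collect_garbage(owners_of, deleted):
    """Return the UIDs collected after `deleted` are removed (non-orphaning).

    owners_of: dict mapping each object's UID to the set of its owners' UIDs.
    An object with owners is collected once all of its owners are dead;
    iterate until no more objects become collectable (the cascade).
    """
    dead = set(deleted)
    progress = True
    while progress:
        progress = False
        for uid, owners in owners_of.items():
            if uid in dead or not owners:
                continue
            if owners <= dead:  # every owner is gone -> dependent is garbage
                dead.add(uid)
                progress = True
    return dead - set(deleted)
```

With a Deployment owning a ReplicaSet owning two Pods, deleting the Deployment collects all three dependents, which is exactly the `expected 0 rs ... expected 0 pods` condition the test waits for above.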
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":15,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:16:46.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 15:16:50.896: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 15:16:53.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943010, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:16:56.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943010, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:16:57.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943010, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:16:59.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943011, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943010, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 15:17:02.840: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:17:02.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4011-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 
is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:17:04.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1418" for this suite. STEP: Destroying namespace "webhook-1418-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":16,"skipped":245,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:17:04.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 
configMap with name cm-test-opt-del-8f42a591-e8a2-4db2-9238-18bb63e29362 STEP: Creating configMap with name cm-test-opt-upd-bcb5a176-b34c-4a6b-8fa4-b5c281206892 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8f42a591-e8a2-4db2-9238-18bb63e29362 STEP: Updating configmap cm-test-opt-upd-bcb5a176-b34c-4a6b-8fa4-b5c281206892 STEP: Creating configMap with name cm-test-opt-create-d4f41f92-b712-427a-b0ee-b2cc051678f3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:17:17.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7189" for this suite. • [SLOW TEST:13.544 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:17:17.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:17:18.000: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 1 15:17:18.039: INFO: Number of nodes with available pods: 0 May 1 15:17:18.039: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 1 15:17:18.098: INFO: Number of nodes with available pods: 0 May 1 15:17:18.098: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:19.123: INFO: Number of nodes with available pods: 0 May 1 15:17:19.123: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:20.103: INFO: Number of nodes with available pods: 0 May 1 15:17:20.103: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:21.328: INFO: Number of nodes with available pods: 0 May 1 15:17:21.328: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:22.102: INFO: Number of nodes with available pods: 0 May 1 15:17:22.102: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:23.101: INFO: Number of nodes with available pods: 1 May 1 15:17:23.101: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 1 15:17:23.220: INFO: Number of nodes with available pods: 1 May 1 15:17:23.220: INFO: Number of running nodes: 0, number of available pods: 1 May 1 15:17:24.363: INFO: Number of nodes with available pods: 0 May 1 15:17:24.363: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 1 15:17:24.878: INFO: Number of nodes with available pods: 0 
May 1 15:17:24.878: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:26.028: INFO: Number of nodes with available pods: 0 May 1 15:17:26.028: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:27.004: INFO: Number of nodes with available pods: 0 May 1 15:17:27.004: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:28.478: INFO: Number of nodes with available pods: 0 May 1 15:17:28.478: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:28.881: INFO: Number of nodes with available pods: 0 May 1 15:17:28.881: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:29.882: INFO: Number of nodes with available pods: 0 May 1 15:17:29.882: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:30.882: INFO: Number of nodes with available pods: 0 May 1 15:17:30.882: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:32.222: INFO: Number of nodes with available pods: 0 May 1 15:17:32.222: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:32.883: INFO: Number of nodes with available pods: 0 May 1 15:17:32.883: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:33.903: INFO: Number of nodes with available pods: 0 May 1 15:17:33.903: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:34.882: INFO: Number of nodes with available pods: 0 May 1 15:17:34.882: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:36.076: INFO: Number of nodes with available pods: 0 May 1 15:17:36.076: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:36.890: INFO: Number of nodes with available pods: 0 May 1 15:17:36.890: INFO: Node kali-worker is running more than one daemon pod May 1 15:17:37.892: INFO: Number of nodes with available pods: 1 May 1 15:17:37.893: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6452, will wait for the garbage collector to delete the pods May 1 15:17:37.970: INFO: Deleting DaemonSet.extensions daemon-set took: 19.452936ms May 1 15:17:38.271: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231295ms May 1 15:17:54.214: INFO: Number of nodes with available pods: 0 May 1 15:17:54.214: INFO: Number of running nodes: 0, number of available pods: 0 May 1 15:17:54.220: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6452/daemonsets","resourceVersion":"651950"},"items":null} May 1 15:17:54.223: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6452/pods","resourceVersion":"651950"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:17:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6452" for this suite. 
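The label flips above (blue, then green) control scheduling because a DaemonSet with a `nodeSelector` only places a pod on nodes whose labels match every selector entry. A minimal sketch of that matching rule (the label key `color` is a stand-in; the e2e test uses a generated label, and the real scheduler also honors taints and affinity):

```python
def daemonset_target_nodes(node_labels, selector):
    """Nodes a DaemonSet with the given nodeSelector would schedule onto.

    node_labels: dict of node name -> label dict.
    selector: required labels; a node matches only if it carries all of them.
    An empty selector matches every node (the default DaemonSet behavior).
    """
    return sorted(
        node for node, labels in node_labels.items()
        if all(labels.get(k) == v for k, v in selector.items())
    )
```

This is why the daemon pod count above goes 0 -> 1 when a node is labeled blue, back to 0 when the label changes to green, and to 1 again after the DaemonSet's selector is updated to green.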
• [SLOW TEST:36.635 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":18,"skipped":287,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:17:54.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 1 15:17:55.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015" in namespace "downward-api-6554" to be "Succeeded or Failed" May 1 15:17:56.213: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Pending", Reason="", readiness=false. Elapsed: 599.492541ms May 1 15:17:58.217: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.603709764s May 1 15:18:00.861: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Pending", Reason="", readiness=false. Elapsed: 5.247123568s May 1 15:18:03.282: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Pending", Reason="", readiness=false. Elapsed: 7.668792495s May 1 15:18:05.767: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Running", Reason="", readiness=true. Elapsed: 10.153165005s May 1 15:18:07.771: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.157062277s STEP: Saw pod success May 1 15:18:07.771: INFO: Pod "downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015" satisfied condition "Succeeded or Failed" May 1 15:18:07.773: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015 container client-container: STEP: delete the pod May 1 15:18:08.099: INFO: Waiting for pod downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015 to disappear May 1 15:18:08.225: INFO: Pod downwardapi-volume-7a494b50-90fc-4489-9582-37ab5c0c6015 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:18:08.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6554" for this suite. 
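The DefaultMode behavior this test checks follows a simple precedence: a per-item `mode` overrides the volume's `defaultMode`, and when neither is set the files default to `0644`. A sketch of that rule as a function:

```python
def effective_mode(default_mode=None, item_mode=None):
    """Effective permission bits for a downward-API/projected volume file.

    Per-item mode wins over the volume-level defaultMode; with neither set,
    Kubernetes applies 0644. Toy model of the rule the DefaultMode test
    above verifies by reading the file's mode inside the pod.
    """
    if item_mode is not None:
        return item_mode
    if default_mode is not None:
        return default_mode
    return 0o644
```

The test pod above reads the mounted file's permissions and asserts they match this computed value.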
• [SLOW TEST:13.752 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:18:08.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:18:08.415: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:18:14.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-84" for this suite. 
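Retrieving logs over websockets, as tested above, uses the same `log` subresource path as a plain HTTP request; the client simply upgrades the connection to stream output. A sketch of how that request path is built (the pod name is illustrative; real clients also handle the server host, auth, and the upgrade handshake):

```python
from urllib.parse import urlencode

def pod_log_path(namespace, pod, container=None, follow=True):
    """Request path for the pod log subresource, /api/v1/.../pods/{name}/log.

    The websocket variant of the logs test above streams from this same
    endpoint after a connection upgrade.
    """
    params = {"follow": str(follow).lower()}
    if container:
        params["container"] = container
    return f"/api/v1/namespaces/{namespace}/pods/{pod}/log?{urlencode(params)}"
```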
• [SLOW TEST:6.432 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:18:14.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-a66c0315-7d62-44a4-ae9e-d005a8d186c3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:18:28.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5507" for this suite.
• [SLOW TEST:13.816 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":345,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:18:28.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 1 15:18:28.628: INFO: Waiting up to 5m0s for pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3" in namespace "emptydir-2001" to be "Succeeded or Failed"
May 1 15:18:28.655: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 27.56132ms
May 1 15:18:31.061: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433633618s
May 1 15:18:33.800: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.172465228s
May 1 15:18:36.148: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.520059459s
May 1 15:18:39.286: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Running", Reason="", readiness=true. Elapsed: 10.658126842s
May 1 15:18:41.417: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.7893447s
STEP: Saw pod success
May 1 15:18:41.417: INFO: Pod "pod-213a1c07-8759-4b80-a160-0fd11c70f0b3" satisfied condition "Succeeded or Failed"
May 1 15:18:41.937: INFO: Trying to get logs from node kali-worker pod pod-213a1c07-8759-4b80-a160-0fd11c70f0b3 container test-container:
STEP: delete the pod
May 1 15:18:44.040: INFO: Waiting for pod pod-213a1c07-8759-4b80-a160-0fd11c70f0b3 to disappear
May 1 15:18:44.351: INFO: Pod pod-213a1c07-8759-4b80-a160-0fd11c70f0b3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:18:44.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2001" for this suite.
• [SLOW TEST:16.100 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":347,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:18:44.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 1 15:18:53.544: INFO: Successfully updated pod "labelsupdatecc3a8c84-ffb6-477f-9bcf-0d1b676a4fdd"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:18:55.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-304" for this suite.
• [SLOW TEST:11.184 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:18:55.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:18:56.268: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:18:57.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9307" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":24,"skipped":422,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:18:57.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9531
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May 1 15:18:58.847: INFO: Found 0 stateful pods, waiting for 3
May 1 15:19:09.155: INFO: Found 2 stateful pods, waiting for 3
May 1 15:19:19.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 1 15:19:19.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 1 15:19:19.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 1 15:19:28.852: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 1 15:19:28.852: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 1 15:19:28.852: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 1 15:19:28.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9531 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 1 15:19:29.120: INFO: stderr: "I0501 15:19:28.975073 142 log.go:172] (0xc000b5c370) (0xc000900140) Create stream\nI0501 15:19:28.975122 142 log.go:172] (0xc000b5c370) (0xc000900140) Stream added, broadcasting: 1\nI0501 15:19:28.976858 142 log.go:172] (0xc000b5c370) Reply frame received for 1\nI0501 15:19:28.976886 142 log.go:172] (0xc000b5c370) (0xc000b1e000) Create stream\nI0501 15:19:28.976893 142 log.go:172] (0xc000b5c370) (0xc000b1e000) Stream added, broadcasting: 3\nI0501 15:19:28.977742 142 log.go:172] (0xc000b5c370) Reply frame received for 3\nI0501 15:19:28.977763 142 log.go:172] (0xc000b5c370) (0xc000be0320) Create stream\nI0501 15:19:28.977770 142 log.go:172] (0xc000b5c370) (0xc000be0320) Stream added, broadcasting: 5\nI0501 15:19:28.978418 142 log.go:172] (0xc000b5c370) Reply frame received for 5\nI0501 15:19:29.037926 142 log.go:172] (0xc000b5c370) Data frame received for 5\nI0501 15:19:29.037946 142 log.go:172] (0xc000be0320) (5) Data frame handling\nI0501 15:19:29.037956 142 log.go:172] (0xc000be0320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:19:29.114450 142 log.go:172] (0xc000b5c370) Data frame received for 3\nI0501 15:19:29.114572 142 log.go:172] (0xc000b1e000) (3) Data frame handling\nI0501 15:19:29.114607 142 log.go:172] (0xc000b1e000) (3) Data frame sent\nI0501 15:19:29.115480 142 log.go:172] (0xc000b5c370) Data frame received for 5\nI0501 15:19:29.115551 142 log.go:172] (0xc000be0320) (5) Data frame handling\nI0501 15:19:29.115594 142 log.go:172] (0xc000b5c370) Data frame received for 3\nI0501 15:19:29.115664 142 log.go:172] (0xc000b1e000) (3) Data frame handling\nI0501 15:19:29.116950 142 log.go:172] (0xc000b5c370) Data frame received for 1\nI0501 15:19:29.116985 142 log.go:172] (0xc000900140) (1) Data frame handling\nI0501 15:19:29.117008 142 log.go:172] (0xc000900140) (1) Data frame sent\nI0501 15:19:29.117028 142 log.go:172] (0xc000b5c370) (0xc000900140) Stream removed, broadcasting: 1\nI0501 15:19:29.117057 142 log.go:172] (0xc000b5c370) Go away received\nI0501 15:19:29.117374 142 log.go:172] (0xc000b5c370) (0xc000900140) Stream removed, broadcasting: 1\nI0501 15:19:29.117390 142 log.go:172] (0xc000b5c370) (0xc000b1e000) Stream removed, broadcasting: 3\nI0501 15:19:29.117398 142 log.go:172] (0xc000b5c370) (0xc000be0320) Stream removed, broadcasting: 5\n"
May 1 15:19:29.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 1 15:19:29.120: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 1 15:19:39.184: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 1 15:19:50.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9531 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:19:50.635: INFO: stderr: "I0501 15:19:50.575766 164 log.go:172] (0xc00003a4d0) (0xc00040aaa0) Create stream\nI0501 15:19:50.575824 164 log.go:172] (0xc00003a4d0) (0xc00040aaa0) Stream added, broadcasting: 1\nI0501 15:19:50.577802 164 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0501 15:19:50.577833 164 log.go:172] (0xc00003a4d0) (0xc000695220) Create stream\nI0501 15:19:50.577844 164 log.go:172] (0xc00003a4d0) (0xc000695220) Stream added, broadcasting: 3\nI0501 15:19:50.578692 164 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0501 15:19:50.578735 164 log.go:172] (0xc00003a4d0) (0xc000bae000) Create stream\nI0501 15:19:50.578745 164 log.go:172] (0xc00003a4d0) (0xc000bae000) Stream added, broadcasting: 5\nI0501 15:19:50.579493 164 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0501 15:19:50.629527 164 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0501 15:19:50.629558 164 log.go:172] (0xc000bae000) (5) Data frame handling\nI0501 15:19:50.629568 164 log.go:172] (0xc000bae000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 15:19:50.629587 164 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0501 15:19:50.629608 164 log.go:172] (0xc000695220) (3) Data frame handling\nI0501 15:19:50.629630 164 log.go:172] (0xc000695220) (3) Data frame sent\nI0501 15:19:50.629644 164 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0501 15:19:50.629655 164 log.go:172] (0xc000695220) (3) Data frame handling\nI0501 15:19:50.629755 164 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0501 15:19:50.629772 164 log.go:172] (0xc000bae000) (5) Data frame handling\nI0501 15:19:50.630827 164 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0501 15:19:50.630843 164 log.go:172] (0xc00040aaa0) (1) Data frame handling\nI0501 15:19:50.630858 164 log.go:172] (0xc00040aaa0) (1) Data frame sent\nI0501 15:19:50.630873 164 log.go:172] (0xc00003a4d0) (0xc00040aaa0) Stream removed, broadcasting: 1\nI0501 15:19:50.630891 164 log.go:172] (0xc00003a4d0) Go away received\nI0501 15:19:50.631257 164 log.go:172] (0xc00003a4d0) (0xc00040aaa0) Stream removed, broadcasting: 1\nI0501 15:19:50.631280 164 log.go:172] (0xc00003a4d0) (0xc000695220) Stream removed, broadcasting: 3\nI0501 15:19:50.631292 164 log.go:172] (0xc00003a4d0) (0xc000bae000) Stream removed, broadcasting: 5\n"
May 1 15:19:50.635: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 1 15:19:50.635: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 1 15:20:00.732: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:20:00.732: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 1 15:20:00.732: INFO: Waiting for Pod statefulset-9531/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 1 15:20:11.451: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:20:11.451: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 1 15:20:11.451: INFO: Waiting for Pod statefulset-9531/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 1 15:20:20.839: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:20:20.839: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
May 1 15:20:30.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9531 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 1 15:20:31.018: INFO: stderr: "I0501 15:20:30.845536 185 log.go:172] (0xc00095c000) (0xc0002220a0) Create stream\nI0501 15:20:30.845591 185 log.go:172] (0xc00095c000) (0xc0002220a0) Stream added, broadcasting: 1\nI0501 15:20:30.847407 185 log.go:172] (0xc00095c000) Reply frame received for 1\nI0501 15:20:30.847441 185 log.go:172] (0xc00095c000) (0xc0004ca000) Create stream\nI0501 15:20:30.847455 185 log.go:172] (0xc00095c000) (0xc0004ca000) Stream added, broadcasting: 3\nI0501 15:20:30.848238 185 log.go:172] (0xc00095c000) Reply frame received for 3\nI0501 15:20:30.848272 185 log.go:172] (0xc00095c000) (0xc0004cab40) Create stream\nI0501 15:20:30.848289 185 log.go:172] (0xc00095c000) (0xc0004cab40) Stream added, broadcasting: 5\nI0501 15:20:30.849099 185 log.go:172] (0xc00095c000) Reply frame received for 5\nI0501 15:20:30.937701 185 log.go:172] (0xc00095c000) Data frame received for 5\nI0501 15:20:30.937724 185 log.go:172] (0xc0004cab40) (5) Data frame handling\nI0501 15:20:30.937738 185 log.go:172] (0xc0004cab40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:20:31.014817 185 log.go:172] (0xc00095c000) Data frame received for 5\nI0501 15:20:31.014842 185 log.go:172] (0xc0004cab40) (5) Data frame handling\nI0501 15:20:31.014872 185 log.go:172] (0xc00095c000) Data frame received for 3\nI0501 15:20:31.014890 185 log.go:172] (0xc0004ca000) (3) Data frame handling\nI0501 15:20:31.014907 185 log.go:172] (0xc0004ca000) (3) Data frame sent\nI0501 15:20:31.014918 185 log.go:172] (0xc00095c000) Data frame received for 3\nI0501 15:20:31.014928 185 log.go:172] (0xc0004ca000) (3) Data frame handling\nI0501 15:20:31.015846 185 log.go:172] (0xc00095c000) Data frame received for 1\nI0501 15:20:31.015888 185 log.go:172] (0xc0002220a0) (1) Data frame handling\nI0501 15:20:31.015919 185 log.go:172] (0xc0002220a0) (1) Data frame sent\nI0501 15:20:31.015949 185 log.go:172] (0xc00095c000) (0xc0002220a0) Stream removed, broadcasting: 1\nI0501 15:20:31.015988 185 log.go:172] (0xc00095c000) Go away received\nI0501 15:20:31.016177 185 log.go:172] (0xc00095c000) (0xc0002220a0) Stream removed, broadcasting: 1\nI0501 15:20:31.016193 185 log.go:172] (0xc00095c000) (0xc0004ca000) Stream removed, broadcasting: 3\nI0501 15:20:31.016207 185 log.go:172] (0xc00095c000) (0xc0004cab40) Stream removed, broadcasting: 5\n"
May 1 15:20:31.018: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 1 15:20:31.018: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 1 15:20:41.046: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 1 15:20:51.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9531 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:20:51.516: INFO: stderr: "I0501 15:20:51.459514 199 log.go:172] (0xc000954790) (0xc000964be0) Create stream\nI0501 15:20:51.459555 199 log.go:172] (0xc000954790) (0xc000964be0) Stream added, broadcasting: 1\nI0501 15:20:51.461499 199 log.go:172] (0xc000954790) Reply frame received for 1\nI0501 15:20:51.461528 199 log.go:172] (0xc000954790) (0xc00092a000) Create stream\nI0501 15:20:51.461540 199 log.go:172] (0xc000954790) (0xc00092a000) Stream added, broadcasting: 3\nI0501 15:20:51.462113 199 log.go:172] (0xc000954790) Reply frame received for 3\nI0501 15:20:51.462136 199 log.go:172] (0xc000954790) (0xc000964c80) Create stream\nI0501 15:20:51.462141 199 log.go:172] (0xc000954790) (0xc000964c80) Stream added, broadcasting: 5\nI0501 15:20:51.462725 199 log.go:172] (0xc000954790) Reply frame received for 5\nI0501 15:20:51.511988 199 log.go:172] (0xc000954790) Data frame received for 3\nI0501 15:20:51.512011 199 log.go:172] (0xc00092a000) (3) Data frame handling\nI0501 15:20:51.512021 199 log.go:172] (0xc00092a000) (3) Data frame sent\nI0501 15:20:51.512027 199 log.go:172] (0xc000954790) Data frame received for 3\nI0501 15:20:51.512033 199 log.go:172] (0xc00092a000) (3) Data frame handling\nI0501 15:20:51.512057 199 log.go:172] (0xc000954790) Data frame received for 5\nI0501 15:20:51.512066 199 log.go:172] (0xc000964c80) (5) Data frame handling\nI0501 15:20:51.512078 199 log.go:172] (0xc000964c80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 15:20:51.512086 199 log.go:172] (0xc000954790) Data frame received for 5\nI0501 15:20:51.512137 199 log.go:172] (0xc000964c80) (5) Data frame handling\nI0501 15:20:51.512912 199 log.go:172] (0xc000954790) Data frame received for 1\nI0501 15:20:51.512933 199 log.go:172] (0xc000964be0) (1) Data frame handling\nI0501 15:20:51.512954 199 log.go:172] (0xc000964be0) (1) Data frame sent\nI0501 15:20:51.512972 199 log.go:172] (0xc000954790) (0xc000964be0) Stream removed, broadcasting: 1\nI0501 15:20:51.512989 199 log.go:172] (0xc000954790) Go away received\nI0501 15:20:51.513433 199 log.go:172] (0xc000954790) (0xc000964be0) Stream removed, broadcasting: 1\nI0501 15:20:51.513451 199 log.go:172] (0xc000954790) (0xc00092a000) Stream removed, broadcasting: 3\nI0501 15:20:51.513463 199 log.go:172] (0xc000954790) (0xc000964c80) Stream removed, broadcasting: 5\n"
May 1 15:20:51.516: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 1 15:20:51.516: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 1 15:21:01.606: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:21:01.606: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:01.606: INFO: Waiting for Pod statefulset-9531/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:01.606: INFO: Waiting for Pod statefulset-9531/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:11.692: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:21:11.692: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:11.692: INFO: Waiting for Pod statefulset-9531/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:21.612: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:21:21.612: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:21.612: INFO: Waiting for Pod statefulset-9531/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:31.612: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
May 1 15:21:31.612: INFO: Waiting for Pod statefulset-9531/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 1 15:21:41.615: INFO: Waiting for StatefulSet statefulset-9531/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 1 15:21:51.613: INFO: Deleting all statefulset in ns statefulset-9531
May 1 15:21:51.616: INFO: Scaling statefulset ss2 to 0
May 1 15:22:21.636: INFO: Waiting for statefulset status.replicas updated to 0
May 1 15:22:21.638: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:22.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9531" for this suite.
• [SLOW TEST:205.679 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":25,"skipped":434,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:23.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-1930/secret-test-762b930e-bda5-4f8b-8550-3439257a446c
STEP: Creating a pod to test consume secrets
May 1 15:22:23.401: INFO: Waiting up to 5m0s for pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06" in namespace "secrets-1930" to be "Succeeded or Failed"
May 1 15:22:23.536: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06": Phase="Pending", Reason="", readiness=false. Elapsed: 134.844863ms
May 1 15:22:25.539: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137589623s
May 1 15:22:27.718: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316746819s
May 1 15:22:30.816: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06": Phase="Pending", Reason="", readiness=false. Elapsed: 7.414765293s
May 1 15:22:32.820: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.418518648s
STEP: Saw pod success
May 1 15:22:32.820: INFO: Pod "pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06" satisfied condition "Succeeded or Failed"
May 1 15:22:32.822: INFO: Trying to get logs from node kali-worker pod pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06 container env-test:
STEP: delete the pod
May 1 15:22:32.862: INFO: Waiting for pod pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06 to disappear
May 1 15:22:32.866: INFO: Pod pod-configmaps-476fbe9d-b7a7-402d-86bf-44bd65a4fa06 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1930" for this suite.
• [SLOW TEST:9.721 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":447,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:32.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 1 15:22:32.950: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 1 15:22:32.969: INFO: Waiting for terminating namespaces to be deleted...
May 1 15:22:32.971: INFO: Logging pods the kubelet thinks is on node kali-worker before test
May 1 15:22:32.976: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 1 15:22:32.976: INFO: Container kindnet-cni ready: true, restart count 1
May 1 15:22:32.976: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 1 15:22:32.976: INFO: Container kube-proxy ready: true, restart count 0
May 1 15:22:32.976: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test
May 1 15:22:32.993: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 1 15:22:32.993: INFO: Container kindnet-cni ready: true, restart count 0
May 1 15:22:32.993: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May 1 15:22:32.993: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160af0c64e9263be], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:34.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1796" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":27,"skipped":448,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:34.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May 1 15:22:34.203: INFO: Waiting up to 5m0s for pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0" in namespace "var-expansion-4659" to be "Succeeded or Failed"
May 1 15:22:34.207: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333858ms
May 1 15:22:36.212: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008693271s
May 1 15:22:38.491: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287801306s
May 1 15:22:40.495: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0": Phase="Running", Reason="", readiness=true. Elapsed: 6.29234081s
May 1 15:22:42.499: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.295975408s
STEP: Saw pod success
May 1 15:22:42.499: INFO: Pod "var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0" satisfied condition "Succeeded or Failed"
May 1 15:22:42.502: INFO: Trying to get logs from node kali-worker2 pod var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0 container dapi-container:
STEP: delete the pod
May 1 15:22:42.534: INFO: Waiting for pod var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0 to disappear
May 1 15:22:42.540: INFO: Pod var-expansion-9942d3e0-779a-49c5-9865-9cb0fba712c0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:42.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4659" for this suite.
• [SLOW TEST:8.440 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":452,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:42.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:22:42.638: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5f22f7b6-09ac-4fdc-9b97-8219f2921458", Controller:(*bool)(0xc0028444b2), BlockOwnerDeletion:(*bool)(0xc0028444b3)}}
May 1 15:22:42.660: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"959fa761-7307-4a5a-beed-aeb95cf5646e", Controller:(*bool)(0xc000fc624a), BlockOwnerDeletion:(*bool)(0xc000fc624b)}}
May 1 15:22:42.684: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"cb3139bb-2b8c-46f5-856f-f29f2743e8ec", Controller:(*bool)(0xc000fc6442), BlockOwnerDeletion:(*bool)(0xc000fc6443)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:48.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7619" for this suite.
• [SLOW TEST:5.546 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":29,"skipped":515,"failed":0}
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:48.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 1 15:22:48.185: INFO: Waiting up to 5m0s for pod "downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e" in namespace "downward-api-9459" to be "Succeeded or Failed"
May 1 15:22:48.233: INFO: Pod "downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 48.321998ms
May 1 15:22:50.280: INFO: Pod "downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095473126s
May 1 15:22:52.291: INFO: Pod "downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105965995s
STEP: Saw pod success
May 1 15:22:52.291: INFO: Pod "downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e" satisfied condition "Succeeded or Failed"
May 1 15:22:52.304: INFO: Trying to get logs from node kali-worker2 pod downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e container dapi-container:
STEP: delete the pod
May 1 15:22:52.376: INFO: Waiting for pod downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e to disappear
May 1 15:22:52.418: INFO: Pod downward-api-699aef5b-cfa2-4f7b-a746-71868bc88e5e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:22:52.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9459" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:22:52.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 1 15:22:52.534: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 1 15:22:52.538: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 1 15:22:52.538: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 1 15:22:52.586: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 1 15:22:52.586: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 1 15:22:52.615: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 1 15:22:52.615: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 1 15:23:02.296: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:23:03.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-392" for this suite.
• [SLOW TEST:11.940 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":31,"skipped":551,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:23:04.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 1 15:23:34.271: INFO: 10 pods remaining
May 1 15:23:34.271: INFO: 5 pods has nil DeletionTimestamp
May 1 15:23:34.271: INFO:
STEP: Gathering metrics
W0501 15:23:35.574155 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 1 15:23:35.574: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:23:35.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-359" for this suite.
• [SLOW TEST:31.208 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":32,"skipped":566,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:23:35.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May 1 15:23:37.793: INFO: namespace kubectl-1914
May 1 15:23:37.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1914'
May 1 15:23:41.261: INFO: stderr: ""
May 1 15:23:41.261: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 1 15:23:42.695: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:42.695: INFO: Found 0 / 1
May 1 15:23:44.497: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:44.497: INFO: Found 0 / 1
May 1 15:23:46.174: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:46.174: INFO: Found 0 / 1
May 1 15:23:46.830: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:46.831: INFO: Found 0 / 1
May 1 15:23:47.469: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:47.469: INFO: Found 0 / 1
May 1 15:23:48.450: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:48.450: INFO: Found 0 / 1
May 1 15:23:49.602: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:49.602: INFO: Found 0 / 1
May 1 15:23:50.916: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:50.916: INFO: Found 1 / 1
May 1 15:23:50.916: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 1 15:23:50.946: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:23:50.946: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 1 15:23:50.946: INFO: wait on agnhost-master startup in kubectl-1914
May 1 15:23:50.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-6fgdb agnhost-master --namespace=kubectl-1914'
May 1 15:23:51.152: INFO: stderr: ""
May 1 15:23:51.152: INFO: stdout: "Paused\n"
STEP: exposing RC
May 1 15:23:51.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1914'
May 1 15:23:51.598: INFO: stderr: ""
May 1 15:23:51.598: INFO: stdout: "service/rm2 exposed\n"
May 1 15:23:51.719: INFO: Service rm2 in namespace kubectl-1914 found.
STEP: exposing service
May 1 15:23:55.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1914'
May 1 15:23:56.742: INFO: stderr: ""
May 1 15:23:56.742: INFO: stdout: "service/rm3 exposed\n"
May 1 15:23:57.348: INFO: Service rm3 in namespace kubectl-1914 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:23:59.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1914" for this suite.
• [SLOW TEST:23.785 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":33,"skipped":578,"failed":0}
S
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:23:59.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:24:10.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2592" for this suite.
• [SLOW TEST:11.556 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":34,"skipped":579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:24:10.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 1 15:24:14.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 1 15:24:16.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943453, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:24:18.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943453, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:24:20.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943454, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723943453, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:24:23.399: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:24:23.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9751-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:24:27.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7878" for this suite.
STEP: Destroying namespace "webhook-7878-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:16.474 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":35,"skipped":610,"failed":0}
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:24:27.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-45011ac7-bee3-4b45-b453-066d1ec7e153
STEP: Creating a pod to test consume secrets
May 1 15:24:28.684: INFO: Waiting up to 5m0s for pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb" in namespace "secrets-5526" to be "Succeeded or Failed"
May 1 15:24:28.893: INFO: Pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 209.495704ms
May 1 15:24:30.898: INFO: Pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214214682s
May 1 15:24:32.971: INFO: Pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287238971s
May 1 15:24:34.982: INFO: Pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.298592114s
STEP: Saw pod success
May 1 15:24:34.982: INFO: Pod "pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb" satisfied condition "Succeeded or Failed"
May 1 15:24:34.985: INFO: Trying to get logs from node kali-worker pod pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb container secret-volume-test:
STEP: delete the pod
May 1 15:24:35.803: INFO: Waiting for pod pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb to disappear
May 1 15:24:35.841: INFO: Pod pod-secrets-6cdd818d-52a7-47c4-a6f4-8371ca2b0fbb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:24:35.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5526" for this suite.
• [SLOW TEST:8.454 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":611,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:24:35.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May 1 15:24:36.622: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:24:49.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-280" for this suite.
• [SLOW TEST:13.218 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":37,"skipped":614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:24:49.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:24:49.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8021" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":38,"skipped":637,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:24:49.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 1 15:24:59.639: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:25:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6646" for this suite. 
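The Container Runtime test above has a container write its termination message to a file and asserts the kubelet surfaces it ("Expected: &{OK} to match Container's Termination Message: OK"). A sketch of the relevant container fields, with hypothetical names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.29             # hypothetical image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

With `FallbackToLogsOnError`, the kubelet uses the file at `terminationMessagePath` when it exists; only if the container fails *and* the file is empty does it fall back to the tail of the container log, which is the behavior this conformance test exercises from the "pod succeeds" side.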
• [SLOW TEST:11.405 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":649,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:25:00.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-8c39ddf7-6a22-476d-8648-039c9455d00f in namespace 
container-probe-393 May 1 15:25:12.685: INFO: Started pod liveness-8c39ddf7-6a22-476d-8648-039c9455d00f in namespace container-probe-393 STEP: checking the pod's current state and verifying that restartCount is present May 1 15:25:13.097: INFO: Initial restart count of pod liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is 0 May 1 15:25:25.902: INFO: Restart count of pod container-probe-393/liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is now 1 (12.804773395s elapsed) May 1 15:25:48.192: INFO: Restart count of pod container-probe-393/liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is now 2 (35.09481486s elapsed) May 1 15:26:04.257: INFO: Restart count of pod container-probe-393/liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is now 3 (51.159603899s elapsed) May 1 15:26:24.312: INFO: Restart count of pod container-probe-393/liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is now 4 (1m11.214844598s elapsed) May 1 15:27:27.184: INFO: Restart count of pod container-probe-393/liveness-8c39ddf7-6a22-476d-8648-039c9455d00f is now 5 (2m14.086414945s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:27:28.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-393" for this suite. 
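The probe test above watches a pod whose liveness probe is designed to start failing, so the kubelet kills and restarts the container repeatedly, and asserts `restartCount` only ever increases (0 → 1 → … → 5 in the log). A minimal sketch of a pod that produces this pattern, under the assumption of a file-based exec probe (names, image, and timings are illustrative, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo          # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox:1.29        # hypothetical image
    # Healthy for 10s, then the probe target disappears and the probe fails.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1      # restart on the first failed probe
```

Each failed probe period triggers a container restart with exponential back-off, which is why the elapsed time between restarts in the log grows (≈13s to the first restart, over a minute before the fifth).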
• [SLOW TEST:147.393 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":658,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:27:28.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments May 1 15:27:29.190: INFO: Waiting up to 5m0s for pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab" in namespace "containers-2462" to be "Succeeded or Failed" May 1 15:27:29.257: INFO: Pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab": Phase="Pending", Reason="", readiness=false. Elapsed: 67.019141ms May 1 15:27:31.367: INFO: Pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.177049437s May 1 15:27:33.571: INFO: Pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380672913s May 1 15:27:35.575: INFO: Pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.385093462s STEP: Saw pod success May 1 15:27:35.576: INFO: Pod "client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab" satisfied condition "Succeeded or Failed" May 1 15:27:35.579: INFO: Trying to get logs from node kali-worker pod client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab container test-container: STEP: delete the pod May 1 15:27:35.860: INFO: Waiting for pod client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab to disappear May 1 15:27:35.866: INFO: Pod client-containers-c6aaf192-19ed-40a9-984e-0c79764da2ab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:27:35.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2462" for this suite. 
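The Docker Containers test above verifies that `args` in a pod spec overrides the image's default arguments (Docker `CMD`) while leaving the entrypoint intact. A sketch of the shape of that spec, with hypothetical names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # hypothetical image with a default CMD
    args: ["echo", "override", "arguments"]   # replaces the image CMD
```

In Kubernetes terms, `command` maps to Docker `ENTRYPOINT` and `args` to `CMD`; setting only `args`, as here, keeps the image's entrypoint and swaps its default arguments, which is exactly what the test's log-inspection step confirms.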
• [SLOW TEST:7.563 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":674,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:27:35.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:27:36.021: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:27:37.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9279" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":42,"skipped":675,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:27:37.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token May 1 15:27:37.963: INFO: created pod pod-service-account-defaultsa May 1 15:27:37.963: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 1 15:27:37.975: INFO: created pod pod-service-account-mountsa May 1 15:27:37.975: INFO: pod pod-service-account-mountsa service account token volume mount: true May 1 15:27:38.020: INFO: created pod pod-service-account-nomountsa May 1 15:27:38.020: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 1 15:27:38.046: INFO: created pod pod-service-account-defaultsa-mountspec May 1 15:27:38.046: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 1 15:27:38.151: INFO: created pod pod-service-account-mountsa-mountspec May 1 15:27:38.151: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 1 15:27:38.176: INFO: created pod pod-service-account-nomountsa-mountspec May 1 
15:27:38.176: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 1 15:27:38.203: INFO: created pod pod-service-account-defaultsa-nomountspec May 1 15:27:38.203: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 1 15:27:38.296: INFO: created pod pod-service-account-mountsa-nomountspec May 1 15:27:38.296: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 1 15:27:39.080: INFO: created pod pod-service-account-nomountsa-nomountspec May 1 15:27:39.080: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:27:39.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4688" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":43,"skipped":682,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:27:40.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7412 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-7412 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7412 May 1 15:27:42.174: INFO: Found 0 stateful pods, waiting for 1 May 1 15:27:52.272: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false May 1 15:28:02.179: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 1 15:28:02.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 15:28:05.889: INFO: stderr: "I0501 15:28:05.790808 281 log.go:172] (0xc0007be000) (0xc0007c0000) Create stream\nI0501 15:28:05.790846 281 log.go:172] (0xc0007be000) (0xc0007c0000) Stream added, broadcasting: 1\nI0501 15:28:05.794055 281 log.go:172] (0xc0007be000) Reply frame received for 1\nI0501 15:28:05.794084 281 log.go:172] (0xc0007be000) (0xc000806000) Create stream\nI0501 15:28:05.794092 281 log.go:172] (0xc0007be000) (0xc000806000) Stream added, broadcasting: 3\nI0501 15:28:05.795118 281 log.go:172] (0xc0007be000) Reply frame received for 3\nI0501 15:28:05.795164 281 log.go:172] (0xc0007be000) (0xc0007c00a0) Create stream\nI0501 15:28:05.795180 281 log.go:172] (0xc0007be000) (0xc0007c00a0) Stream added, broadcasting: 5\nI0501 15:28:05.796094 281 log.go:172] (0xc0007be000) Reply frame received for 5\nI0501 15:28:05.852841 281 log.go:172] (0xc0007be000) Data frame 
received for 5\nI0501 15:28:05.852873 281 log.go:172] (0xc0007c00a0) (5) Data frame handling\nI0501 15:28:05.852899 281 log.go:172] (0xc0007c00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:28:05.880158 281 log.go:172] (0xc0007be000) Data frame received for 5\nI0501 15:28:05.880196 281 log.go:172] (0xc0007be000) Data frame received for 3\nI0501 15:28:05.880217 281 log.go:172] (0xc000806000) (3) Data frame handling\nI0501 15:28:05.880229 281 log.go:172] (0xc000806000) (3) Data frame sent\nI0501 15:28:05.880236 281 log.go:172] (0xc0007be000) Data frame received for 3\nI0501 15:28:05.880242 281 log.go:172] (0xc000806000) (3) Data frame handling\nI0501 15:28:05.880273 281 log.go:172] (0xc0007c00a0) (5) Data frame handling\nI0501 15:28:05.882107 281 log.go:172] (0xc0007be000) Data frame received for 1\nI0501 15:28:05.882125 281 log.go:172] (0xc0007c0000) (1) Data frame handling\nI0501 15:28:05.882141 281 log.go:172] (0xc0007c0000) (1) Data frame sent\nI0501 15:28:05.882154 281 log.go:172] (0xc0007be000) (0xc0007c0000) Stream removed, broadcasting: 1\nI0501 15:28:05.882165 281 log.go:172] (0xc0007be000) Go away received\nI0501 15:28:05.882651 281 log.go:172] (0xc0007be000) (0xc0007c0000) Stream removed, broadcasting: 1\nI0501 15:28:05.882680 281 log.go:172] (0xc0007be000) (0xc000806000) Stream removed, broadcasting: 3\nI0501 15:28:05.882695 281 log.go:172] (0xc0007be000) (0xc0007c00a0) Stream removed, broadcasting: 5\n" May 1 15:28:05.889: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 15:28:05.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 15:28:05.893: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 1 15:28:15.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 15:28:15.898: INFO: Waiting for 
statefulset status.replicas updated to 0 May 1 15:28:16.092: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:16.092: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:16.092: INFO: May 1 15:28:16.092: INFO: StatefulSet ss has not reached scale 3, at 1 May 1 15:28:17.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.814823882s May 1 15:28:18.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.809882517s May 1 15:28:19.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.804750907s May 1 15:28:20.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.682755739s May 1 15:28:21.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.648356488s May 1 15:28:22.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.643419927s May 1 15:28:23.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.637741855s May 1 15:28:24.294: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.617079222s May 1 15:28:25.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 612.730423ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7412 May 1 15:28:26.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:28:26.579: INFO: stderr: "I0501 15:28:26.496222 311 
log.go:172] (0xc000936160) (0xc0009740a0) Create stream\nI0501 15:28:26.496279 311 log.go:172] (0xc000936160) (0xc0009740a0) Stream added, broadcasting: 1\nI0501 15:28:26.498904 311 log.go:172] (0xc000936160) Reply frame received for 1\nI0501 15:28:26.498935 311 log.go:172] (0xc000936160) (0xc0008ba000) Create stream\nI0501 15:28:26.498944 311 log.go:172] (0xc000936160) (0xc0008ba000) Stream added, broadcasting: 3\nI0501 15:28:26.499778 311 log.go:172] (0xc000936160) Reply frame received for 3\nI0501 15:28:26.499815 311 log.go:172] (0xc000936160) (0xc0007f52c0) Create stream\nI0501 15:28:26.499830 311 log.go:172] (0xc000936160) (0xc0007f52c0) Stream added, broadcasting: 5\nI0501 15:28:26.500689 311 log.go:172] (0xc000936160) Reply frame received for 5\nI0501 15:28:26.571661 311 log.go:172] (0xc000936160) Data frame received for 3\nI0501 15:28:26.571682 311 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0501 15:28:26.571698 311 log.go:172] (0xc0008ba000) (3) Data frame sent\nI0501 15:28:26.571703 311 log.go:172] (0xc000936160) Data frame received for 3\nI0501 15:28:26.571708 311 log.go:172] (0xc0008ba000) (3) Data frame handling\nI0501 15:28:26.571947 311 log.go:172] (0xc000936160) Data frame received for 5\nI0501 15:28:26.571976 311 log.go:172] (0xc0007f52c0) (5) Data frame handling\nI0501 15:28:26.572001 311 log.go:172] (0xc0007f52c0) (5) Data frame sent\nI0501 15:28:26.572012 311 log.go:172] (0xc000936160) Data frame received for 5\nI0501 15:28:26.572025 311 log.go:172] (0xc0007f52c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 15:28:26.574013 311 log.go:172] (0xc000936160) Data frame received for 1\nI0501 15:28:26.574037 311 log.go:172] (0xc0009740a0) (1) Data frame handling\nI0501 15:28:26.574050 311 log.go:172] (0xc0009740a0) (1) Data frame sent\nI0501 15:28:26.574064 311 log.go:172] (0xc000936160) (0xc0009740a0) Stream removed, broadcasting: 1\nI0501 15:28:26.574081 311 log.go:172] (0xc000936160) Go away 
received\nI0501 15:28:26.574567 311 log.go:172] (0xc000936160) (0xc0009740a0) Stream removed, broadcasting: 1\nI0501 15:28:26.574592 311 log.go:172] (0xc000936160) (0xc0008ba000) Stream removed, broadcasting: 3\nI0501 15:28:26.574605 311 log.go:172] (0xc000936160) (0xc0007f52c0) Stream removed, broadcasting: 5\n" May 1 15:28:26.579: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 15:28:26.579: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 15:28:26.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:28:26.799: INFO: stderr: "I0501 15:28:26.717458 334 log.go:172] (0xc0009a13f0) (0xc0009986e0) Create stream\nI0501 15:28:26.717507 334 log.go:172] (0xc0009a13f0) (0xc0009986e0) Stream added, broadcasting: 1\nI0501 15:28:26.721974 334 log.go:172] (0xc0009a13f0) Reply frame received for 1\nI0501 15:28:26.722027 334 log.go:172] (0xc0009a13f0) (0xc000691720) Create stream\nI0501 15:28:26.722051 334 log.go:172] (0xc0009a13f0) (0xc000691720) Stream added, broadcasting: 3\nI0501 15:28:26.722993 334 log.go:172] (0xc0009a13f0) Reply frame received for 3\nI0501 15:28:26.723027 334 log.go:172] (0xc0009a13f0) (0xc000456b40) Create stream\nI0501 15:28:26.723038 334 log.go:172] (0xc0009a13f0) (0xc000456b40) Stream added, broadcasting: 5\nI0501 15:28:26.723932 334 log.go:172] (0xc0009a13f0) Reply frame received for 5\nI0501 15:28:26.791813 334 log.go:172] (0xc0009a13f0) Data frame received for 5\nI0501 15:28:26.791852 334 log.go:172] (0xc000456b40) (5) Data frame handling\nI0501 15:28:26.791873 334 log.go:172] (0xc000456b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 
15:28:26.791913 334 log.go:172] (0xc0009a13f0) Data frame received for 3\nI0501 15:28:26.791934 334 log.go:172] (0xc000691720) (3) Data frame handling\nI0501 15:28:26.791946 334 log.go:172] (0xc000691720) (3) Data frame sent\nI0501 15:28:26.791956 334 log.go:172] (0xc0009a13f0) Data frame received for 3\nI0501 15:28:26.791967 334 log.go:172] (0xc000691720) (3) Data frame handling\nI0501 15:28:26.792110 334 log.go:172] (0xc0009a13f0) Data frame received for 5\nI0501 15:28:26.792128 334 log.go:172] (0xc000456b40) (5) Data frame handling\nI0501 15:28:26.794444 334 log.go:172] (0xc0009a13f0) Data frame received for 1\nI0501 15:28:26.794471 334 log.go:172] (0xc0009986e0) (1) Data frame handling\nI0501 15:28:26.794483 334 log.go:172] (0xc0009986e0) (1) Data frame sent\nI0501 15:28:26.794498 334 log.go:172] (0xc0009a13f0) (0xc0009986e0) Stream removed, broadcasting: 1\nI0501 15:28:26.794594 334 log.go:172] (0xc0009a13f0) Go away received\nI0501 15:28:26.794884 334 log.go:172] (0xc0009a13f0) (0xc0009986e0) Stream removed, broadcasting: 1\nI0501 15:28:26.794903 334 log.go:172] (0xc0009a13f0) (0xc000691720) Stream removed, broadcasting: 3\nI0501 15:28:26.794915 334 log.go:172] (0xc0009a13f0) (0xc000456b40) Stream removed, broadcasting: 5\n" May 1 15:28:26.799: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 15:28:26.799: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 15:28:26.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:28:26.984: INFO: stderr: "I0501 15:28:26.918193 354 log.go:172] (0xc000ad60b0) (0xc00054cbe0) Create stream\nI0501 15:28:26.918252 354 log.go:172] (0xc000ad60b0) (0xc00054cbe0) Stream added, broadcasting: 1\nI0501 15:28:26.923185 354 
log.go:172] (0xc000ad60b0) Reply frame received for 1\nI0501 15:28:26.923250 354 log.go:172] (0xc000ad60b0) (0xc000a5c000) Create stream\nI0501 15:28:26.923269 354 log.go:172] (0xc000ad60b0) (0xc000a5c000) Stream added, broadcasting: 3\nI0501 15:28:26.926802 354 log.go:172] (0xc000ad60b0) Reply frame received for 3\nI0501 15:28:26.926831 354 log.go:172] (0xc000ad60b0) (0xc0007c7360) Create stream\nI0501 15:28:26.926844 354 log.go:172] (0xc000ad60b0) (0xc0007c7360) Stream added, broadcasting: 5\nI0501 15:28:26.927621 354 log.go:172] (0xc000ad60b0) Reply frame received for 5\nI0501 15:28:26.978978 354 log.go:172] (0xc000ad60b0) Data frame received for 5\nI0501 15:28:26.979012 354 log.go:172] (0xc0007c7360) (5) Data frame handling\nI0501 15:28:26.979021 354 log.go:172] (0xc0007c7360) (5) Data frame sent\nI0501 15:28:26.979027 354 log.go:172] (0xc000ad60b0) Data frame received for 5\nI0501 15:28:26.979031 354 log.go:172] (0xc0007c7360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0501 15:28:26.979047 354 log.go:172] (0xc000ad60b0) Data frame received for 3\nI0501 15:28:26.979053 354 log.go:172] (0xc000a5c000) (3) Data frame handling\nI0501 15:28:26.979066 354 log.go:172] (0xc000a5c000) (3) Data frame sent\nI0501 15:28:26.979075 354 log.go:172] (0xc000ad60b0) Data frame received for 3\nI0501 15:28:26.979082 354 log.go:172] (0xc000a5c000) (3) Data frame handling\nI0501 15:28:26.980260 354 log.go:172] (0xc000ad60b0) Data frame received for 1\nI0501 15:28:26.980278 354 log.go:172] (0xc00054cbe0) (1) Data frame handling\nI0501 15:28:26.980290 354 log.go:172] (0xc00054cbe0) (1) Data frame sent\nI0501 15:28:26.980307 354 log.go:172] (0xc000ad60b0) (0xc00054cbe0) Stream removed, broadcasting: 1\nI0501 15:28:26.980338 354 log.go:172] (0xc000ad60b0) Go away received\nI0501 15:28:26.980591 354 log.go:172] (0xc000ad60b0) (0xc00054cbe0) Stream removed, broadcasting: 1\nI0501 
15:28:26.980605 354 log.go:172] (0xc000ad60b0) (0xc000a5c000) Stream removed, broadcasting: 3\nI0501 15:28:26.980611 354 log.go:172] (0xc000ad60b0) (0xc0007c7360) Stream removed, broadcasting: 5\n" May 1 15:28:26.984: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 1 15:28:26.984: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 1 15:28:26.987: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 1 15:28:36.993: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 1 15:28:36.993: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 1 15:28:36.993: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 1 15:28:36.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 15:28:37.219: INFO: stderr: "I0501 15:28:37.134205 374 log.go:172] (0xc0003cbef0) (0xc0009fc140) Create stream\nI0501 15:28:37.134267 374 log.go:172] (0xc0003cbef0) (0xc0009fc140) Stream added, broadcasting: 1\nI0501 15:28:37.136635 374 log.go:172] (0xc0003cbef0) Reply frame received for 1\nI0501 15:28:37.136694 374 log.go:172] (0xc0003cbef0) (0xc0006772c0) Create stream\nI0501 15:28:37.136713 374 log.go:172] (0xc0003cbef0) (0xc0006772c0) Stream added, broadcasting: 3\nI0501 15:28:37.137835 374 log.go:172] (0xc0003cbef0) Reply frame received for 3\nI0501 15:28:37.137875 374 log.go:172] (0xc0003cbef0) (0xc0009fc280) Create stream\nI0501 15:28:37.137889 374 log.go:172] (0xc0003cbef0) (0xc0009fc280) Stream added, broadcasting: 5\nI0501 15:28:37.138933 374 log.go:172] (0xc0003cbef0) Reply 
frame received for 5\nI0501 15:28:37.206194 374 log.go:172] (0xc0003cbef0) Data frame received for 5\nI0501 15:28:37.206255 374 log.go:172] (0xc0003cbef0) Data frame received for 3\nI0501 15:28:37.206303 374 log.go:172] (0xc0006772c0) (3) Data frame handling\nI0501 15:28:37.206318 374 log.go:172] (0xc0006772c0) (3) Data frame sent\nI0501 15:28:37.206325 374 log.go:172] (0xc0003cbef0) Data frame received for 3\nI0501 15:28:37.206332 374 log.go:172] (0xc0006772c0) (3) Data frame handling\nI0501 15:28:37.206365 374 log.go:172] (0xc0009fc280) (5) Data frame handling\nI0501 15:28:37.206382 374 log.go:172] (0xc0009fc280) (5) Data frame sent\nI0501 15:28:37.206390 374 log.go:172] (0xc0003cbef0) Data frame received for 5\nI0501 15:28:37.206395 374 log.go:172] (0xc0009fc280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:28:37.207881 374 log.go:172] (0xc0003cbef0) Data frame received for 1\nI0501 15:28:37.207972 374 log.go:172] (0xc0009fc140) (1) Data frame handling\nI0501 15:28:37.208018 374 log.go:172] (0xc0009fc140) (1) Data frame sent\nI0501 15:28:37.208098 374 log.go:172] (0xc0003cbef0) (0xc0009fc140) Stream removed, broadcasting: 1\nI0501 15:28:37.208191 374 log.go:172] (0xc0003cbef0) Go away received\nI0501 15:28:37.208792 374 log.go:172] (0xc0003cbef0) (0xc0009fc140) Stream removed, broadcasting: 1\nI0501 15:28:37.208823 374 log.go:172] (0xc0003cbef0) (0xc0006772c0) Stream removed, broadcasting: 3\nI0501 15:28:37.208841 374 log.go:172] (0xc0003cbef0) (0xc0009fc280) Stream removed, broadcasting: 5\n" May 1 15:28:37.219: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 15:28:37.219: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 15:28:37.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh 
-x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 15:28:37.495: INFO: stderr: "I0501 15:28:37.346119 397 log.go:172] (0xc000a8d970) (0xc00097a280) Create stream\nI0501 15:28:37.346196 397 log.go:172] (0xc000a8d970) (0xc00097a280) Stream added, broadcasting: 1\nI0501 15:28:37.348578 397 log.go:172] (0xc000a8d970) Reply frame received for 1\nI0501 15:28:37.348613 397 log.go:172] (0xc000a8d970) (0xc000a0c140) Create stream\nI0501 15:28:37.348621 397 log.go:172] (0xc000a8d970) (0xc000a0c140) Stream added, broadcasting: 3\nI0501 15:28:37.349543 397 log.go:172] (0xc000a8d970) Reply frame received for 3\nI0501 15:28:37.349575 397 log.go:172] (0xc000a8d970) (0xc00097a320) Create stream\nI0501 15:28:37.349585 397 log.go:172] (0xc000a8d970) (0xc00097a320) Stream added, broadcasting: 5\nI0501 15:28:37.350329 397 log.go:172] (0xc000a8d970) Reply frame received for 5\nI0501 15:28:37.457710 397 log.go:172] (0xc000a8d970) Data frame received for 5\nI0501 15:28:37.457751 397 log.go:172] (0xc00097a320) (5) Data frame handling\nI0501 15:28:37.457774 397 log.go:172] (0xc00097a320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:28:37.488741 397 log.go:172] (0xc000a8d970) Data frame received for 3\nI0501 15:28:37.488791 397 log.go:172] (0xc000a0c140) (3) Data frame handling\nI0501 15:28:37.488865 397 log.go:172] (0xc000a0c140) (3) Data frame sent\nI0501 15:28:37.489009 397 log.go:172] (0xc000a8d970) Data frame received for 3\nI0501 15:28:37.489059 397 log.go:172] (0xc000a0c140) (3) Data frame handling\nI0501 15:28:37.489304 397 log.go:172] (0xc000a8d970) Data frame received for 5\nI0501 15:28:37.489333 397 log.go:172] (0xc00097a320) (5) Data frame handling\nI0501 15:28:37.491262 397 log.go:172] (0xc000a8d970) Data frame received for 1\nI0501 15:28:37.491298 397 log.go:172] (0xc00097a280) (1) Data frame handling\nI0501 15:28:37.491333 397 log.go:172] (0xc00097a280) (1) Data frame sent\nI0501 15:28:37.491353 397 log.go:172] 
(0xc000a8d970) (0xc00097a280) Stream removed, broadcasting: 1\nI0501 15:28:37.491369 397 log.go:172] (0xc000a8d970) Go away received\nI0501 15:28:37.491670 397 log.go:172] (0xc000a8d970) (0xc00097a280) Stream removed, broadcasting: 1\nI0501 15:28:37.491685 397 log.go:172] (0xc000a8d970) (0xc000a0c140) Stream removed, broadcasting: 3\nI0501 15:28:37.491693 397 log.go:172] (0xc000a8d970) (0xc00097a320) Stream removed, broadcasting: 5\n" May 1 15:28:37.495: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 15:28:37.495: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 15:28:37.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 1 15:28:37.746: INFO: stderr: "I0501 15:28:37.607458 417 log.go:172] (0xc000b89810) (0xc000bf06e0) Create stream\nI0501 15:28:37.607508 417 log.go:172] (0xc000b89810) (0xc000bf06e0) Stream added, broadcasting: 1\nI0501 15:28:37.612319 417 log.go:172] (0xc000b89810) Reply frame received for 1\nI0501 15:28:37.612371 417 log.go:172] (0xc000b89810) (0xc0005055e0) Create stream\nI0501 15:28:37.612392 417 log.go:172] (0xc000b89810) (0xc0005055e0) Stream added, broadcasting: 3\nI0501 15:28:37.613768 417 log.go:172] (0xc000b89810) Reply frame received for 3\nI0501 15:28:37.613809 417 log.go:172] (0xc000b89810) (0xc0004ff540) Create stream\nI0501 15:28:37.613829 417 log.go:172] (0xc000b89810) (0xc0004ff540) Stream added, broadcasting: 5\nI0501 15:28:37.614833 417 log.go:172] (0xc000b89810) Reply frame received for 5\nI0501 15:28:37.681794 417 log.go:172] (0xc000b89810) Data frame received for 5\nI0501 15:28:37.681818 417 log.go:172] (0xc0004ff540) (5) Data frame handling\nI0501 15:28:37.681834 417 log.go:172] (0xc0004ff540) (5) Data frame sent\n+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 15:28:37.738926 417 log.go:172] (0xc000b89810) Data frame received for 3\nI0501 15:28:37.738968 417 log.go:172] (0xc0005055e0) (3) Data frame handling\nI0501 15:28:37.739010 417 log.go:172] (0xc000b89810) Data frame received for 5\nI0501 15:28:37.739067 417 log.go:172] (0xc0004ff540) (5) Data frame handling\nI0501 15:28:37.739103 417 log.go:172] (0xc0005055e0) (3) Data frame sent\nI0501 15:28:37.739119 417 log.go:172] (0xc000b89810) Data frame received for 3\nI0501 15:28:37.739128 417 log.go:172] (0xc0005055e0) (3) Data frame handling\nI0501 15:28:37.740889 417 log.go:172] (0xc000b89810) Data frame received for 1\nI0501 15:28:37.740912 417 log.go:172] (0xc000bf06e0) (1) Data frame handling\nI0501 15:28:37.740925 417 log.go:172] (0xc000bf06e0) (1) Data frame sent\nI0501 15:28:37.740942 417 log.go:172] (0xc000b89810) (0xc000bf06e0) Stream removed, broadcasting: 1\nI0501 15:28:37.740970 417 log.go:172] (0xc000b89810) Go away received\nI0501 15:28:37.741512 417 log.go:172] (0xc000b89810) (0xc000bf06e0) Stream removed, broadcasting: 1\nI0501 15:28:37.741539 417 log.go:172] (0xc000b89810) (0xc0005055e0) Stream removed, broadcasting: 3\nI0501 15:28:37.741555 417 log.go:172] (0xc000b89810) (0xc0004ff540) Stream removed, broadcasting: 5\n" May 1 15:28:37.746: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 1 15:28:37.746: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 1 15:28:37.746: INFO: Waiting for statefulset status.replicas updated to 0 May 1 15:28:37.750: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 1 15:28:47.804: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 1 15:28:47.804: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 1 15:28:47.804: INFO: Waiting for 
pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 1 15:28:47.858: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:47.858: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:47.858: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:47.858: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:47.858: INFO: May 1 15:28:47.858: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:49.003: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:49.003: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:49.003: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:49.003: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:49.003: INFO: May 1 15:28:49.003: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:50.045: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:50.045: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:50.045: INFO: ss-1 kali-worker2 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:50.045: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:50.046: INFO: May 1 15:28:50.046: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:51.050: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:51.050: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:51.050: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:51.050: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:51.050: INFO: May 1 15:28:51.050: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:52.075: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:52.075: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:52.075: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:52.075: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:52.075: INFO: May 1 15:28:52.075: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:53.080: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:53.080: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:27:42 +0000 UTC }] May 1 15:28:53.080: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:53.080: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:53.080: INFO: May 1 
15:28:53.080: INFO: StatefulSet ss has not reached scale 0, at 3 May 1 15:28:54.084: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:54.084: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:54.084: INFO: May 1 15:28:54.084: INFO: StatefulSet ss has not reached scale 0, at 1 May 1 15:28:55.089: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:55.089: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:55.089: INFO: May 1 15:28:55.089: INFO: StatefulSet ss has not reached scale 0, at 1 May 1 15:28:56.093: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:56.093: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:56.093: INFO: May 1 15:28:56.093: INFO: 
StatefulSet ss has not reached scale 0, at 1 May 1 15:28:57.098: INFO: POD NODE PHASE GRACE CONDITIONS May 1 15:28:57.098: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 15:28:16 +0000 UTC }] May 1 15:28:57.098: INFO: May 1 15:28:57.098: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-7412 May 1 15:28:58.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:28:58.233: INFO: rc: 1 May 1 15:28:58.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 1 15:29:08.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:08.329: INFO: rc: 1 May 1 15:29:08.329: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v
/tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:29:18.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:18.491: INFO: rc: 1 May 1 15:29:18.491: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:29:28.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:28.585: INFO: rc: 1 May 1 15:29:28.585: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:29:38.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:38.686: INFO: rc: 1 May 1 15:29:38.686: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:29:48.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:48.776: INFO: rc: 1 May 1 15:29:48.776: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:29:58.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:29:58.886: INFO: rc: 1 May 1 15:29:58.886: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:08.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:08.986: INFO: rc: 1 May 1 15:30:08.986: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:18.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:19.079: INFO: rc: 1 May 1 15:30:19.079: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:29.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:29.175: INFO: rc: 1 May 1 15:30:29.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:39.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:39.260: INFO: rc: 1 May 1 15:30:39.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:49.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:49.610: INFO: rc: 1 May 1 15:30:49.610: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:30:59.610: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:30:59.714: INFO: rc: 1 May 1 15:30:59.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:31:09.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:31:09.906: INFO: rc: 1 May 1 15:31:09.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not 
found error: exit status 1 May 1 15:31:19.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:31:20.287: INFO: rc: 1 May 1 15:31:20.287: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:31:30.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:31:30.392: INFO: rc: 1 May 1 15:31:30.392: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 15:31:40.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 1 15:31:40.483: INFO: rc: 1 May 1 15:31:40.483: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 May 1 
15:31:50.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:31:50.580: INFO: rc: 1
May 1 15:31:50.580: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:00.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:00.815: INFO: rc: 1
May 1 15:32:00.815: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:10.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:10.910: INFO: rc: 1
May 1 15:32:10.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:20.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:21.103: INFO: rc: 1
May 1 15:32:21.103: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:31.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:31.196: INFO: rc: 1
May 1 15:32:31.197: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:41.197: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:41.303: INFO: rc: 1
May 1 15:32:41.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:32:51.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:32:51.567: INFO: rc: 1
May 1 15:32:51.567: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:01.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:01.678: INFO: rc: 1
May 1 15:33:01.678: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:11.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:12.273: INFO: rc: 1
May 1 15:33:12.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:22.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:22.371: INFO: rc: 1
May 1 15:33:22.371: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:32.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:32.690: INFO: rc: 1
May 1 15:33:32.690: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:42.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:42.800: INFO: rc: 1
May 1 15:33:42.800: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:33:52.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:33:52.906: INFO: rc: 1
May 1 15:33:52.906: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
May 1 15:34:02.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7412 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 1 15:34:03.009: INFO: rc: 1
May 1 15:34:03.009: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
May 1 15:34:03.009: INFO: Scaling statefulset ss to 0
May 1 15:34:03.028: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May 1 15:34:03.030: INFO: Deleting all statefulset in ns statefulset-7412
May 1 15:34:03.032: INFO: Scaling statefulset ss to 0
May 1 15:34:03.039: INFO: Waiting for statefulset status.replicas updated to 0
May 1 15:34:03.041: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:34:03.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7412" for this suite.
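The loop above retries the same `kubectl exec` at a fixed 10s interval, tolerating "pods \"ss-1\" not found" until the pod reappears or the framework gives up. A minimal sketch of that retry pattern as a generic shell helper; the function name and attempt cap are illustrative (the e2e framework implements this retry around RunHostCmd in Go, not in shell):

```shell
# retry_cmd INTERVAL MAX_ATTEMPTS CMD...
# Runs CMD until it succeeds, sleeping INTERVAL seconds between attempts
# and giving up (returning CMD's last exit code) after MAX_ATTEMPTS tries.
# The 10s interval matches the log above; the attempt cap is illustrative.
retry_cmd() {
  interval=$1
  max_attempts=$2
  shift 2
  attempt=1
  while :; do
    "$@" && return 0                                   # command succeeded
    rc=$?
    [ "$attempt" -ge "$max_attempts" ] && return "$rc" # retries exhausted
    echo "rc: $rc -- waiting ${interval}s to retry" >&2
    attempt=$((attempt + 1))
    sleep "$interval"
  done
}
```

Usage would look like `retry_cmd 10 30 kubectl exec --namespace=statefulset-7412 ss-1 -- sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/'`, with the `|| true` from the log dropped so the helper can actually observe failure.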
• [SLOW TEST:382.673 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":44,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:34:03.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 
May 1 15:34:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9954" for this suite. • [SLOW TEST:8.172 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":724,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:34:11.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:34:23.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5272" for this suite. • [SLOW TEST:12.205 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":46,"skipped":780,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:34:23.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-da7b343a-27b6-4779-b143-de1339a12e73 STEP: Creating a pod to test consume secrets May 1 15:34:24.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9" in namespace "projected-6503" to be "Succeeded or Failed" May 1 15:34:24.328: INFO: Pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9": Phase="Pending", Reason="", readiness=false. Elapsed: 205.921998ms May 1 15:34:26.332: INFO: Pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210047305s May 1 15:34:28.336: INFO: Pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.213863008s May 1 15:34:30.341: INFO: Pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.218338391s STEP: Saw pod success May 1 15:34:30.341: INFO: Pod "pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9" satisfied condition "Succeeded or Failed" May 1 15:34:30.343: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9 container projected-secret-volume-test: STEP: delete the pod May 1 15:34:30.384: INFO: Waiting for pod pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9 to disappear May 1 15:34:30.430: INFO: Pod pod-projected-secrets-882d4975-60fa-4186-ada2-aed83b2877e9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:34:30.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6503" for this suite. • [SLOW TEST:6.986 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":782,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:34:30.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-slnq STEP: Creating a pod to test atomic-volume-subpath May 1 15:34:30.563: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-slnq" in namespace "subpath-2979" to be "Succeeded or Failed" May 1 15:34:30.575: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.39359ms May 1 15:34:32.718: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154267844s May 1 15:34:34.721: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1579947s May 1 15:34:36.725: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 6.161979121s May 1 15:34:38.730: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 8.166460629s May 1 15:34:40.734: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 10.170619128s May 1 15:34:42.738: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 12.174673562s May 1 15:34:44.742: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 14.178485898s May 1 15:34:46.747: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 16.183374888s May 1 15:34:48.751: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.187486487s May 1 15:34:51.388: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 20.824963829s May 1 15:34:53.392: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 22.828543416s May 1 15:34:55.396: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 24.832821272s May 1 15:34:57.404: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Running", Reason="", readiness=true. Elapsed: 26.840900571s May 1 15:34:59.408: INFO: Pod "pod-subpath-test-downwardapi-slnq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.844640905s STEP: Saw pod success May 1 15:34:59.408: INFO: Pod "pod-subpath-test-downwardapi-slnq" satisfied condition "Succeeded or Failed" May 1 15:34:59.411: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-slnq container test-container-subpath-downwardapi-slnq: STEP: delete the pod May 1 15:34:59.642: INFO: Waiting for pod pod-subpath-test-downwardapi-slnq to disappear May 1 15:34:59.665: INFO: Pod pod-subpath-test-downwardapi-slnq no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-slnq May 1 15:34:59.665: INFO: Deleting pod "pod-subpath-test-downwardapi-slnq" in namespace "subpath-2979" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:34:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2979" for this suite. 
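The subpath test above creates a pod that mounts a single item of a downward API volume via `subPath` and waits for the pod to reach Succeeded. A minimal sketch of the shape of such a pod spec, emitted as YAML; the pod name, image, command, and paths here are illustrative, not the exact manifest the framework generates:

```shell
# Print an illustrative pod manifest of the kind the subpath test exercises:
# a downwardAPI volume whose "podname" item is mounted at a subPath, so the
# container sees one projected file rather than the whole volume directory.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox
    command: ["sh", "-c", "test -s /opt/podname"]
    volumeMounts:
    - name: downward-vol
      mountPath: /opt/podname
      subPath: podname
  volumes:
  - name: downward-vol
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
)
printf '%s\n' "$manifest"
```

The repeated `Phase="Running"` polls in the log correspond to the container reading the subPath file for several seconds before exiting successfully.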
• [SLOW TEST:29.237 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":48,"skipped":785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:34:59.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:35:17.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7757" for this suite. • [SLOW TEST:17.578 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":49,"skipped":854,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:35:17.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-dc1eeb1d-31e9-4c30-9812-bc4d6c6f0e41 STEP: Creating configMap with name cm-test-opt-upd-9dc32237-d51b-4068-81ae-3287eb791543 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dc1eeb1d-31e9-4c30-9812-bc4d6c6f0e41 STEP: Updating configmap cm-test-opt-upd-9dc32237-d51b-4068-81ae-3287eb791543 STEP: Creating configMap with name cm-test-opt-create-4876d002-c003-4159-8dfd-d7ab151f4a31 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:36:46.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4756" for this suite. 
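The ConfigMap test above mounts two configMaps into one pod, then deletes one, updates the other, and creates a third, waiting for the kubelet to reflect each change in the volume. The detail that makes this work is `optional: true` on the volume source, which lets the pod keep running while a referenced configMap is absent. An illustrative volume stanza (configMap names are taken from the log; the volume names and the rest of the pod spec are assumptions):

```shell
# Print an illustrative "volumes:" stanza for the optional-configMap test pod.
# optional: true means a missing configMap yields an empty volume instead of
# blocking the pod, which is what allows the delete/create steps above.
volumes=$(cat <<'EOF'
volumes:
- name: delcm-volume
  configMap:
    name: cm-test-opt-del-dc1eeb1d-31e9-4c30-9812-bc4d6c6f0e41
    optional: true
- name: updcm-volume
  configMap:
    name: cm-test-opt-upd-9dc32237-d51b-4068-81ae-3287eb791543
    optional: true
- name: createcm-volume
  configMap:
    name: cm-test-opt-create-4876d002-c003-4159-8dfd-d7ab151f4a31
    optional: true
EOF
)
printf '%s\n' "$volumes"
```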
• [SLOW TEST:89.585 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":861,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:36:46.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qm9g STEP: Creating a pod to test atomic-volume-subpath May 1 15:36:48.248: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qm9g" in namespace "subpath-7038" to be "Succeeded or Failed" May 1 15:36:48.450: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 201.388299ms May 1 15:36:50.737: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.488485094s May 1 15:36:53.144: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.895331121s May 1 15:36:55.336: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 7.087319119s May 1 15:36:57.437: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 9.188836304s May 1 15:36:59.510: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 11.261418932s May 1 15:37:01.514: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 13.265936655s May 1 15:37:03.564: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 15.31511262s May 1 15:37:05.567: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 17.318910114s May 1 15:37:07.572: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 19.323694243s May 1 15:37:09.618: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 21.369105844s May 1 15:37:11.621: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 23.372762618s May 1 15:37:13.725: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Running", Reason="", readiness=true. Elapsed: 25.476793088s May 1 15:37:15.733: INFO: Pod "pod-subpath-test-configmap-qm9g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 27.484605445s STEP: Saw pod success May 1 15:37:15.733: INFO: Pod "pod-subpath-test-configmap-qm9g" satisfied condition "Succeeded or Failed" May 1 15:37:15.736: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-qm9g container test-container-subpath-configmap-qm9g: STEP: delete the pod May 1 15:37:15.783: INFO: Waiting for pod pod-subpath-test-configmap-qm9g to disappear May 1 15:37:15.799: INFO: Pod pod-subpath-test-configmap-qm9g no longer exists STEP: Deleting pod pod-subpath-test-configmap-qm9g May 1 15:37:15.799: INFO: Deleting pod "pod-subpath-test-configmap-qm9g" in namespace "subpath-7038" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:37:15.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7038" for this suite. • [SLOW TEST:28.970 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":51,"skipped":861,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 
15:37:15.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 1 15:37:23.277: INFO: Successfully updated pod "annotationupdate636dd945-6133-4636-a2f9-1aa6499ccd05" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:37:25.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6561" for this suite. • [SLOW TEST:9.876 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:37:25.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May 1 15:37:25.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9454'
May 1 15:37:26.493: INFO: stderr: ""
May 1 15:37:26.493: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:37:26.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:26.634: INFO: stderr: ""
May 1 15:37:26.634: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
May 1 15:37:26.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:26.718: INFO: stderr: ""
May 1 15:37:26.718: INFO: stdout: ""
May 1 15:37:26.718: INFO: update-demo-nautilus-jjtqr is created but not running
May 1 15:37:31.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:32.191: INFO: stderr: ""
May 1 15:37:32.191: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
May 1 15:37:32.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:32.902: INFO: stderr: ""
May 1 15:37:32.902: INFO: stdout: ""
May 1 15:37:32.902: INFO: update-demo-nautilus-jjtqr is created but not running
May 1 15:37:37.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:38.004: INFO: stderr: ""
May 1 15:37:38.004: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
May 1 15:37:38.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:38.093: INFO: stderr: ""
May 1 15:37:38.093: INFO: stdout: "true"
May 1 15:37:38.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:38.177: INFO: stderr: ""
May 1 15:37:38.177: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:37:38.177: INFO: validating pod update-demo-nautilus-jjtqr
May 1 15:37:38.181: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:37:38.181: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:37:38.181: INFO: update-demo-nautilus-jjtqr is verified up and running
May 1 15:37:38.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmjrb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:38.263: INFO: stderr: ""
May 1 15:37:38.263: INFO: stdout: "true"
May 1 15:37:38.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nmjrb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:38.348: INFO: stderr: ""
May 1 15:37:38.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:37:38.348: INFO: validating pod update-demo-nautilus-nmjrb
May 1 15:37:38.352: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:37:38.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:37:38.352: INFO: update-demo-nautilus-nmjrb is verified up and running
STEP: scaling down the replication controller
May 1 15:37:38.892: INFO: scanned /root for discovery docs:
May 1 15:37:38.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9454'
May 1 15:37:40.320: INFO: stderr: ""
May 1 15:37:40.320: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:37:40.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:40.566: INFO: stderr: ""
May 1 15:37:40.566: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 1 15:37:45.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:45.806: INFO: stderr: ""
May 1 15:37:45.807: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 1 15:37:50.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:50.911: INFO: stderr: ""
May 1 15:37:50.912: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-nmjrb "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 1 15:37:55.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:56.005: INFO: stderr: ""
May 1 15:37:56.005: INFO: stdout: "update-demo-nautilus-jjtqr "
May 1 15:37:56.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:56.094: INFO: stderr: ""
May 1 15:37:56.094: INFO: stdout: "true"
May 1 15:37:56.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:56.206: INFO: stderr: ""
May 1 15:37:56.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:37:56.207: INFO: validating pod update-demo-nautilus-jjtqr
May 1 15:37:56.211: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:37:56.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:37:56.211: INFO: update-demo-nautilus-jjtqr is verified up and running
STEP: scaling up the replication controller
May 1 15:37:56.213: INFO: scanned /root for discovery docs:
May 1 15:37:56.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9454'
May 1 15:37:57.363: INFO: stderr: ""
May 1 15:37:57.363: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 1 15:37:57.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:37:57.470: INFO: stderr: ""
May 1 15:37:57.470: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-wb2m7 "
May 1 15:37:57.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:57.556: INFO: stderr: ""
May 1 15:37:57.556: INFO: stdout: "true"
May 1 15:37:57.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:57.634: INFO: stderr: ""
May 1 15:37:57.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:37:57.634: INFO: validating pod update-demo-nautilus-jjtqr
May 1 15:37:57.637: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:37:57.637: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:37:57.637: INFO: update-demo-nautilus-jjtqr is verified up and running
May 1 15:37:57.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wb2m7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:37:57.751: INFO: stderr: ""
May 1 15:37:57.751: INFO: stdout: ""
May 1 15:37:57.751: INFO: update-demo-nautilus-wb2m7 is created but not running
May 1 15:38:02.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9454'
May 1 15:38:03.869: INFO: stderr: ""
May 1 15:38:03.869: INFO: stdout: "update-demo-nautilus-jjtqr update-demo-nautilus-wb2m7 "
May 1 15:38:03.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:38:04.002: INFO: stderr: ""
May 1 15:38:04.002: INFO: stdout: "true"
May 1 15:38:04.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjtqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:38:04.136: INFO: stderr: ""
May 1 15:38:04.136: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:38:04.136: INFO: validating pod update-demo-nautilus-jjtqr
May 1 15:38:04.140: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:38:04.140: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:38:04.140: INFO: update-demo-nautilus-jjtqr is verified up and running
May 1 15:38:04.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wb2m7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:38:04.232: INFO: stderr: ""
May 1 15:38:04.232: INFO: stdout: "true"
May 1 15:38:04.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wb2m7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9454'
May 1 15:38:04.319: INFO: stderr: ""
May 1 15:38:04.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 1 15:38:04.319: INFO: validating pod update-demo-nautilus-wb2m7
May 1 15:38:04.322: INFO: got data: { "image": "nautilus.jpg" }
May 1 15:38:04.322: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 1 15:38:04.322: INFO: update-demo-nautilus-wb2m7 is verified up and running
STEP: using delete to clean up resources
May 1 15:38:04.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9454'
May 1 15:38:04.426: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 1 15:38:04.426: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 1 15:38:04.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9454'
May 1 15:38:09.606: INFO: stderr: "No resources found in kubectl-9454 namespace.\n"
May 1 15:38:09.606: INFO: stdout: ""
May 1 15:38:09.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9454 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 1 15:38:09.702: INFO: stderr: ""
May 1 15:38:09.702: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:09.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9454" for this suite.
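For reference, the replication controller that this test pipes into `kubectl create -f -` and then scales has roughly the shape below. The controller name, namespace, selector label, container name, and image are all taken from the log output above; the initial replica count of 2 matches the two pods observed, while the remaining fields are assumptions for illustration:

```yaml
# Sketch of the "update-demo" replication controller exercised above.
# Name, namespace, label, and image come from the log; other fields are assumed.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
  namespace: kubectl-9454
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

The scaling steps in the log then reduce to `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9454`, followed by the same command with `--replicas=2`.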
• [SLOW TEST:44.024 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":53,"skipped":906,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:09.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-212140b7-5018-4228-b588-5cf27a2a295b
STEP: Creating a pod to test consume secrets
May 1 15:38:09.809: INFO: Waiting up to 5m0s for pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25" in namespace "secrets-9813" to be "Succeeded or Failed"
May 1 15:38:09.887: INFO: Pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25": Phase="Pending", Reason="", readiness=false. Elapsed: 77.942961ms
May 1 15:38:11.935: INFO: Pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125675094s
May 1 15:38:14.006: INFO: Pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196845697s
May 1 15:38:16.010: INFO: Pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201012978s
STEP: Saw pod success
May 1 15:38:16.010: INFO: Pod "pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25" satisfied condition "Succeeded or Failed"
May 1 15:38:16.013: INFO: Trying to get logs from node kali-worker pod pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25 container secret-volume-test:
STEP: delete the pod
May 1 15:38:16.054: INFO: Waiting for pod pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25 to disappear
May 1 15:38:16.068: INFO: Pod pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:16.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9813" for this suite.
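A pod of the rough shape this secrets test creates can be sketched as follows. The pod, secret, container, and namespace names come from the log above; the image, uid, fsGroup, and mode values are illustrative assumptions (the test only requires a non-root user with `defaultMode` and `fsGroup` set):

```yaml
# Hedged sketch of the secret-consuming test pod; names from the log,
# image and numeric values assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-db1e7d67-5f70-4f80-bd0c-955a50e61b25
  namespace: secrets-9813
spec:
  securityContext:
    runAsUser: 1000      # assumed non-root uid
    fsGroup: 1001        # assumed gid; files in the volume are group-owned by it
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-212140b7-5018-4228-b588-5cf27a2a295b
      defaultMode: 0400  # assumed; applies to every key projected from the secret
```

`defaultMode` sets the permission bits on each projected file, and `fsGroup` controls the group ownership of the mounted tmpfs, which is what the test asserts on.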
• [SLOW TEST:6.384 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":926,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:16.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:38:16.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9" in namespace "projected-8217" to be "Succeeded or Failed"
May 1 15:38:16.187: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832193ms
May 1 15:38:18.528: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344484738s
May 1 15:38:20.532: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348252046s
May 1 15:38:22.714: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.530221616s
May 1 15:38:24.827: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.643853597s
STEP: Saw pod success
May 1 15:38:24.827: INFO: Pod "downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9" satisfied condition "Succeeded or Failed"
May 1 15:38:24.830: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9 container client-container:
STEP: delete the pod
May 1 15:38:25.055: INFO: Waiting for pod downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9 to disappear
May 1 15:38:25.086: INFO: Pod downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:25.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8217" for this suite.
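The "set mode on item file" check above exercises a projected downwardAPI volume with a per-item `mode`. A sketch of the pod spec involved, with the pod, container, and namespace names taken from the log and the image, path, and mode value assumed:

```yaml
# Hedged sketch of the projected downwardAPI test pod; image, item path,
# and mode are assumptions, names are from the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-16b6a4c8-9d95-4226-a938-48af727306b9
  namespace: projected-8217
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname            # assumed item path
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # per-item mode overrides defaultMode
```

The per-item `mode` is the feature under test: it overrides the volume-level `defaultMode` for that single projected file.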
• [SLOW TEST:8.995 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":934,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:25.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 1 15:38:26.486: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 1 15:38:28.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944307, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:38:30.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944307, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:38:32.923: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944307, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944306, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:38:36.253: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 1 15:38:36.274: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:36.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8753" for this suite.
STEP: Destroying namespace "webhook-8753-markers" for this suite.
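The "deny crd creation" step registers a validating webhook that intercepts CRD creation. A rough sketch of such a registration is below; the backing service name (`e2e-test-webhook`) and namespace come from the log, while the configuration name, webhook name, URL path, and caBundle are hypothetical placeholders, not the test's actual values:

```yaml
# Hedged sketch of a webhook registration that would deny CRD creation.
# Service name/namespace from the log; all other identifiers hypothetical.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-webhook          # hypothetical name
webhooks:
- name: deny-crd.example.com      # hypothetical name
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-8753
      name: e2e-test-webhook
      path: /crd                  # hypothetical path
    # caBundle: <base64 PEM of the CA that signed the server cert>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With `failurePolicy: Fail` and a handler that rejects the AdmissionReview, any `kubectl create` of a CustomResourceDefinition is denied by the API server, which is the behavior the test asserts.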
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.156 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":56,"skipped":968,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:37.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:38:39.111: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 1 15:38:43.373: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:44.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9297" for this suite.
• [SLOW TEST:7.640 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":57,"skipped":978,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:44.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 1 15:38:46.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c" in namespace "downward-api-878" to be "Succeeded or Failed"
May 1 15:38:47.319: INFO: Pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 348.561515ms
May 1 15:38:49.810: INFO: Pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839901994s
May 1 15:38:51.813: INFO: Pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.842607503s
May 1 15:38:53.818: INFO: Pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.847561786s
STEP: Saw pod success
May 1 15:38:53.818: INFO: Pod "downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c" satisfied condition "Succeeded or Failed"
May 1 15:38:53.821: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c container client-container:
STEP: delete the pod
May 1 15:38:53.900: INFO: Waiting for pod downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c to disappear
May 1 15:38:54.363: INFO: Pod downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:38:54.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-878" for this suite.
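The "podname only" test above uses a plain (non-projected) downwardAPI volume that exposes a single field. A sketch of the pod spec, with the pod, container, and namespace names taken from the log and the image and item path assumed:

```yaml
# Hedged sketch of the downward API test pod; image and item path assumed,
# names from the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-23ceb258-3089-430d-b486-7981c7f86e1c
  namespace: downward-api-878
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname           # assumed item path
        fieldRef:
          fieldPath: metadata.name
```

Unlike the projected variant exercised earlier in this log, here the `downwardAPI` volume source is used directly; the container reads `/etc/podinfo/podname` and the test compares its contents against the pod's own name.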
• [SLOW TEST:9.504 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":980,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:38:54.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:38:54.950: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7" in namespace "security-context-test-7300" to be "Succeeded or Failed"
May 1 15:38:54.979: INFO: Pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.290297ms
May 1 15:38:56.983: INFO: Pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03291663s
May 1 15:38:58.987: INFO: Pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036961986s
May 1 15:39:01.007: INFO: Pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056943008s
May 1 15:39:01.007: INFO: Pod "busybox-user-65534-5bdc135b-ce29-421c-ad1d-37f9fd10b0c7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:01.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7300" for this suite.
• [SLOW TEST:6.624 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":987,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:01.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:39:01.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9912'
May 1 15:39:01.554: INFO: stderr: ""
May 1 15:39:01.554: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May 1 15:39:01.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9912'
May 1 15:39:02.188: INFO: stderr: ""
May 1 15:39:02.188: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 1 15:39:03.253: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:03.253: INFO: Found 0 / 1
May 1 15:39:04.194: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:04.194: INFO: Found 0 / 1
May 1 15:39:05.253: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:05.253: INFO: Found 0 / 1
May 1 15:39:06.516: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:06.517: INFO: Found 0 / 1
May 1 15:39:07.233: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:07.233: INFO: Found 1 / 1
May 1 15:39:07.233: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 1 15:39:07.236: INFO: Selector matched 1 pods for map[app:agnhost]
May 1 15:39:07.236: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 1 15:39:07.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-tmckk --namespace=kubectl-9912'
May 1 15:39:07.428: INFO: stderr: ""
May 1 15:39:07.428: INFO: stdout: "Name: agnhost-master-tmckk\nNamespace: kubectl-9912\nPriority: 0\nNode: kali-worker2/172.17.0.18\nStart Time: Fri, 01 May 2020 15:39:01 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.176\nIPs:\n IP: 10.244.1.176\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://65173ffda75a42a36eed5246e067adb9cbdd3c9cc105421efa588c27c585bac4\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 01 May 2020 15:39:05 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-56cmk (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-56cmk:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-56cmk\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-9912/agnhost-master-tmckk to kali-worker2\n Normal Pulled 4s kubelet, kali-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, kali-worker2 Created container agnhost-master\n Normal Started 1s kubelet, kali-worker2 Started container agnhost-master\n"
May 1 15:39:07.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9912'
May 1 15:39:07.530: INFO: stderr: ""
May 1 15:39:07.530: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9912\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-master-tmckk\n"
May 1 15:39:07.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9912'
May 1 15:39:07.626: INFO: stderr: ""
May 1 15:39:07.626: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9912\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.210.116\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.176:6379\nSession Affinity: None\nEvents: <none>\n"
May 1 15:39:07.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May 1 15:39:07.751: INFO: stderr: ""
May 1 15:39:07.751: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:30:59 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: <unset>\n RenewTime: Fri, 01 May 2020 15:39:06 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 01 May 2020 15:38:30 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 01 May 2020 15:38:30 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 01 May 2020 15:38:30 +0000 Wed, 29 Apr 2020 09:30:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 01 May 2020 15:38:30 +0000 Wed, 29 Apr 2020 09:31:34 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.19\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 2146cf85bed648199604ab2e0e9ac609\n System UUID: e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-rvq2k 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d6h\n kube-system coredns-66bff467f8-w6zxd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d6h\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kindnet-65djz 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d6h\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-proxy-pnhtq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\n local-path-storage local-path-provisioner-bd4bb6b75-6l9ph 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d6h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n"
May 1 15:39:07.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-9912'
May 1 15:39:07.852: INFO: stderr: ""
May 1 15:39:07.852: INFO: stdout: "Name: kubectl-9912\nLabels: e2e-framework=kubectl\n e2e-run=54142f2e-34e6-44e6-afee-6db2eef92fa2\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:07.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9912" for this suite.
• [SLOW TEST:6.843 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":60,"skipped":1002,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:07.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 1 15:39:10.354: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 1 15:39:12.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:39:14.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944350, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:39:18.098: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:39:18.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:20.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5885" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:13.555 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":61,"skipped":1018,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:21.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May 1 15:39:21.834: INFO: Waiting up to 5m0s for pod "pod-f341d60c-87e5-41cb-ae09-af324770494a" in namespace "emptydir-4382" to be "Succeeded or Failed"
May 1 15:39:21.996: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a": Phase="Pending", Reason="", readiness=false. Elapsed: 161.745547ms
May 1 15:39:24.031: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19733778s
May 1 15:39:26.059: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22493802s
May 1 15:39:28.270: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a": Phase="Running", Reason="", readiness=true. Elapsed: 6.436071817s
May 1 15:39:30.273: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.439509806s
STEP: Saw pod success
May 1 15:39:30.274: INFO: Pod "pod-f341d60c-87e5-41cb-ae09-af324770494a" satisfied condition "Succeeded or Failed"
May 1 15:39:30.275: INFO: Trying to get logs from node kali-worker pod pod-f341d60c-87e5-41cb-ae09-af324770494a container test-container: 
STEP: delete the pod
May 1 15:39:30.314: INFO: Waiting for pod pod-f341d60c-87e5-41cb-ae09-af324770494a to disappear
May 1 15:39:30.318: INFO: Pod pod-f341d60c-87e5-41cb-ae09-af324770494a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:30.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4382" for this suite.
• [SLOW TEST:8.909 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1024,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:30.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 1 15:39:30.988: INFO: Waiting up to 5m0s for pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf" in namespace "downward-api-5770" to be "Succeeded or Failed"
May 1 15:39:31.015: INFO: Pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.103044ms
May 1 15:39:33.044: INFO: Pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055366832s
May 1 15:39:35.047: INFO: Pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058703273s
May 1 15:39:37.052: INFO: Pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063891551s
STEP: Saw pod success
May 1 15:39:37.052: INFO: Pod "downward-api-59881285-20f4-432a-8dc0-511fb469aedf" satisfied condition "Succeeded or Failed"
May 1 15:39:37.055: INFO: Trying to get logs from node kali-worker2 pod downward-api-59881285-20f4-432a-8dc0-511fb469aedf container dapi-container: 
STEP: delete the pod
May 1 15:39:37.455: INFO: Waiting for pod downward-api-59881285-20f4-432a-8dc0-511fb469aedf to disappear
May 1 15:39:37.480: INFO: Pod downward-api-59881285-20f4-432a-8dc0-511fb469aedf no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:37.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5770" for this suite.
• [SLOW TEST:7.272 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1085,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:37.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 1 15:39:38.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 1 15:39:40.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:39:43.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:39:44.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944378, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:39:48.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:39:48.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:39:49.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8709" for this suite.
STEP: Destroying namespace "webhook-8709-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.523 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":64,"skipped":1090,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:39:50.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 1 15:39:51.313: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 1 15:39:53.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:39:55.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 1 15:39:58.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944391, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:40:01.222: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:40:01.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-45" for this suite.
STEP: Destroying namespace "webhook-45-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.336 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":65,"skipped":1093,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:40:02.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap 
with name configmap-test-volume-map-1e27c288-269a-4168-9b77-97c1c03d6c71
STEP: Creating a pod to test consume configMaps
May 1 15:40:02.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512" in namespace "configmap-6680" to be "Succeeded or Failed"
May 1 15:40:02.839: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512": Phase="Pending", Reason="", readiness=false. Elapsed: 32.962013ms
May 1 15:40:04.843: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036666915s
May 1 15:40:06.846: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040376775s
May 1 15:40:08.876: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512": Phase="Running", Reason="", readiness=true. Elapsed: 6.069670778s
May 1 15:40:10.879: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07319366s
STEP: Saw pod success
May 1 15:40:10.879: INFO: Pod "pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512" satisfied condition "Succeeded or Failed"
May 1 15:40:10.883: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512 container configmap-volume-test:
STEP: delete the pod
May 1 15:40:11.386: INFO: Waiting for pod pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512 to disappear
May 1 15:40:11.422: INFO: Pod pod-configmaps-9b98adf6-aed6-4a71-91ef-32b94ff16512 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:40:11.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6680" for this suite.
• [SLOW TEST:8.973 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:40:11.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:40:11.754: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 1 15:40:13.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 create -f -' May 1 15:40:20.822: INFO: stderr: "" May 1 15:40:20.822: INFO: stdout: 
"e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 1 15:40:20.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 delete e2e-test-crd-publish-openapi-2388-crds test-cr' May 1 15:40:20.981: INFO: stderr: "" May 1 15:40:20.981: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 1 15:40:20.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 apply -f -' May 1 15:40:21.264: INFO: stderr: "" May 1 15:40:21.264: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 1 15:40:21.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 delete e2e-test-crd-publish-openapi-2388-crds test-cr' May 1 15:40:21.396: INFO: stderr: "" May 1 15:40:21.396: INFO: stdout: "e2e-test-crd-publish-openapi-2388-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 1 15:40:21.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2388-crds' May 1 15:40:21.656: INFO: stderr: "" May 1 15:40:21.656: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2388-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:40:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1246" for this suite. • [SLOW TEST:13.190 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":67,"skipped":1137,"failed":0} S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client
May 1 15:40:24.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-f6561834-20ea-4639-b856-42ad22bb235a
STEP: Creating secret with name s-test-opt-upd-8263fa62-8553-421c-b158-52437c4ade63
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f6561834-20ea-4639-b856-42ad22bb235a
STEP: Updating secret s-test-opt-upd-8263fa62-8553-421c-b158-52437c4ade63
STEP: Creating secret with name s-test-opt-create-861804af-9b24-452a-943d-a6e90853fb0b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:41:37.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-377" for this suite.
• [SLOW TEST:72.400 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1138,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:41:37.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 1 15:41:37.730: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
May 1 15:41:38.811: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 1 15:41:41.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944498, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:41:43.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944498, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:41:46.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944498, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:41:47.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944499, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723944498, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:41:51.394: INFO: Waited 1.312002397s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:41:53.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8451" for this suite. • [SLOW TEST:16.634 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":69,"skipped":1141,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:41:53.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-map-724c9f65-d18a-4690-b143-bee17a8bd66f
STEP: Creating a pod to test consume configMaps
May 1 15:41:55.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939" in namespace "configmap-5954" to be "Succeeded or Failed"
May 1 15:41:55.168: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939": Phase="Pending", Reason="", readiness=false. Elapsed: 63.512879ms
May 1 15:41:57.172: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06744767s
May 1 15:41:59.422: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317447391s
May 1 15:42:01.861: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939": Phase="Pending", Reason="", readiness=false. Elapsed: 6.756709092s
May 1 15:42:03.868: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.763254613s
STEP: Saw pod success
May 1 15:42:03.868: INFO: Pod "pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939" satisfied condition "Succeeded or Failed"
May 1 15:42:03.871: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939 container configmap-volume-test:
STEP: delete the pod
May 1 15:42:04.036: INFO: Waiting for pod pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939 to disappear
May 1 15:42:04.046: INFO: Pod pod-configmaps-4a825ea6-605f-4b4e-88a0-dee2d3ef3939 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:42:04.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5954" for this suite.
• [SLOW TEST:10.398 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1154,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:42:04.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-df568cd6-29e2-40ca-b398-2b7de6cb6946 STEP: Creating a pod to test consume secrets May 1 15:42:04.440: INFO: Waiting up to 5m0s for pod "pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1" in namespace "secrets-8939" to be "Succeeded or Failed" May 1 15:42:04.522: INFO: Pod "pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 81.922262ms May 1 15:42:06.526: INFO: Pod "pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.086021745s May 1 15:42:08.529: INFO: Pod "pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089192413s STEP: Saw pod success May 1 15:42:08.529: INFO: Pod "pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1" satisfied condition "Succeeded or Failed" May 1 15:42:08.532: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1 container secret-volume-test: STEP: delete the pod May 1 15:42:08.590: INFO: Waiting for pod pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1 to disappear May 1 15:42:08.610: INFO: Pod pod-secrets-85887906-4b77-4f8f-b8ad-ee7ec7038ce1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:42:08.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8939" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1156,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:42:08.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May 1 15:42:09.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:42:25.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4273" for this suite.
• [SLOW TEST:18.250 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":72,"skipped":1157,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:42:26.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:42:38.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-815" for this suite. • [SLOW TEST:12.033 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1164,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:42:38.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod 
busybox-489ee9a5-acee-4480-9026-113fad7b036c in namespace container-probe-2184 May 1 15:42:46.674: INFO: Started pod busybox-489ee9a5-acee-4480-9026-113fad7b036c in namespace container-probe-2184 STEP: checking the pod's current state and verifying that restartCount is present May 1 15:42:46.676: INFO: Initial restart count of pod busybox-489ee9a5-acee-4480-9026-113fad7b036c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:46:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2184" for this suite. • [SLOW TEST:248.804 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:46:47.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 1 15:46:52.479: INFO: Successfully updated pod "annotationupdatebedcad00-5f70-4083-9768-58ead85a3ec2" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:46:56.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3426" for this suite. • [SLOW TEST:8.875 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:46:56.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 15:46:59.066: INFO: Waiting up to 5m0s for pod "pod-18a35858-b244-4750-8718-5e69106be33f" in namespace "emptydir-9694" to be "Succeeded or Failed" May 1 15:46:59.076: INFO: Pod "pod-18a35858-b244-4750-8718-5e69106be33f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.952942ms May 1 15:47:01.155: INFO: Pod "pod-18a35858-b244-4750-8718-5e69106be33f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089453228s May 1 15:47:03.389: INFO: Pod "pod-18a35858-b244-4750-8718-5e69106be33f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323503877s May 1 15:47:05.394: INFO: Pod "pod-18a35858-b244-4750-8718-5e69106be33f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.328107458s STEP: Saw pod success May 1 15:47:05.394: INFO: Pod "pod-18a35858-b244-4750-8718-5e69106be33f" satisfied condition "Succeeded or Failed" May 1 15:47:05.396: INFO: Trying to get logs from node kali-worker pod pod-18a35858-b244-4750-8718-5e69106be33f container test-container: STEP: delete the pod May 1 15:47:05.480: INFO: Waiting for pod pod-18a35858-b244-4750-8718-5e69106be33f to disappear May 1 15:47:05.610: INFO: Pod pod-18a35858-b244-4750-8718-5e69106be33f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:47:05.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9694" for this suite. 
• [SLOW TEST:9.036 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:47:05.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-41acfd2a-5efd-4ce1-93df-f1345e1f8b1d STEP: Creating a pod to test consume configMaps May 1 15:47:06.325: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd" in namespace "projected-3530" to be "Succeeded or Failed" May 1 15:47:06.437: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 112.657706ms May 1 15:47:08.508: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183786479s May 1 15:47:10.544: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219230154s May 1 15:47:12.550: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd": Phase="Running", Reason="", readiness=true. Elapsed: 6.225876479s May 1 15:47:14.555: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.230405967s STEP: Saw pod success May 1 15:47:14.555: INFO: Pod "pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd" satisfied condition "Succeeded or Failed" May 1 15:47:14.558: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd container projected-configmap-volume-test: STEP: delete the pod May 1 15:47:14.607: INFO: Waiting for pod pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd to disappear May 1 15:47:14.622: INFO: Pod pod-projected-configmaps-545db2c2-490b-4ce8-be51-c8796cf406bd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:47:14.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3530" for this suite. 
• [SLOW TEST:9.011 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1312,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:47:14.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-f6541afe-9130-4b1f-9ce0-3c2c394b0e75 STEP: Creating a pod to test consume secrets May 1 15:47:14.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2" in namespace "projected-2956" to be "Succeeded or Failed" May 1 15:47:14.899: INFO: Pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.225929ms May 1 15:47:16.903: INFO: Pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027170595s May 1 15:47:18.908: INFO: Pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03184375s May 1 15:47:20.994: INFO: Pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117580523s STEP: Saw pod success May 1 15:47:20.994: INFO: Pod "pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2" satisfied condition "Succeeded or Failed" May 1 15:47:20.996: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2 container projected-secret-volume-test: STEP: delete the pod May 1 15:47:21.163: INFO: Waiting for pod pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2 to disappear May 1 15:47:21.196: INFO: Pod pod-projected-secrets-8d19916d-79fb-444c-a65c-bb88c513fdd2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:47:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2956" for this suite. 
• [SLOW TEST:6.730 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:47:21.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:47:21.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4251" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":79,"skipped":1411,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:47:21.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:47:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5017" for this suite. 
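The Job test that just started exercises `restartPolicy: OnFailure` semantics: when a task fails, the kubelet restarts the container in place (a "local" restart, no replacement pod), and the Job still reaches its completion count. A toy sketch of that retry-until-complete behavior, with a task that fails on its first attempt (the failure pattern is simulated, not the test's actual container command):

```go
package main

import "fmt"

// runTask simulates a task that fails on its first attempt and succeeds on
// retry — the "sometimes fail, locally restarted" behavior the Job relies on.
func runTask(attempt int) error {
	if attempt == 0 {
		return fmt.Errorf("transient failure")
	}
	return nil
}

func main() {
	const completions = 2
	succeeded := 0
	for pod := 0; pod < completions; pod++ {
		for attempt := 0; ; attempt++ {
			if err := runTask(attempt); err != nil {
				// restartPolicy: OnFailure — retry in the same pod.
				fmt.Printf("pod %d attempt %d failed: %v (restarting locally)\n", pod, attempt, err)
				continue
			}
			succeeded++
			break
		}
	}
	fmt.Printf("job complete: %d/%d succeeded\n", succeeded, completions)
}
```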
• [SLOW TEST:30.531 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":80,"skipped":1430,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:47:52.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 15:47:53.638: INFO: Waiting up to 5m0s for pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991" in namespace "emptydir-9113" to be "Succeeded or Failed" May 1 15:47:53.714: INFO: Pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991": Phase="Pending", Reason="", readiness=false. Elapsed: 75.046772ms May 1 15:47:55.941: INFO: Pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302784345s May 1 15:47:58.030: INFO: Pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.39175441s May 1 15:48:00.034: INFO: Pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395040016s STEP: Saw pod success May 1 15:48:00.034: INFO: Pod "pod-1b084ae5-1fd5-4dd5-8253-5f398262c991" satisfied condition "Succeeded or Failed" May 1 15:48:00.036: INFO: Trying to get logs from node kali-worker2 pod pod-1b084ae5-1fd5-4dd5-8253-5f398262c991 container test-container: STEP: delete the pod May 1 15:48:00.159: INFO: Waiting for pod pod-1b084ae5-1fd5-4dd5-8253-5f398262c991 to disappear May 1 15:48:00.203: INFO: Pod pod-1b084ae5-1fd5-4dd5-8253-5f398262c991 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:00.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9113" for this suite. • [SLOW TEST:7.905 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 
15:48:00.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium May 1 15:48:00.450: INFO: Waiting up to 5m0s for pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2" in namespace "emptydir-3044" to be "Succeeded or Failed" May 1 15:48:00.480: INFO: Pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.310471ms May 1 15:48:02.509: INFO: Pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059454576s May 1 15:48:04.513: INFO: Pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063713432s May 1 15:48:06.527: INFO: Pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077326285s STEP: Saw pod success May 1 15:48:06.527: INFO: Pod "pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2" satisfied condition "Succeeded or Failed" May 1 15:48:06.530: INFO: Trying to get logs from node kali-worker2 pod pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2 container test-container: STEP: delete the pod May 1 15:48:06.786: INFO: Waiting for pod pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2 to disappear May 1 15:48:06.946: INFO: Pod pod-79ce3786-d10d-42ad-a8af-9d5c4ff93cb2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:06.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3044" for this suite. 
• [SLOW TEST:6.890 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1546,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:48:07.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 1 15:48:07.324: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 15:48:07.766: INFO: Waiting for terminating namespaces to be deleted... 
May 1 15:48:07.769: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 1 15:48:07.774: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 15:48:07.774: INFO: Container kindnet-cni ready: true, restart count 1 May 1 15:48:07.774: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 15:48:07.774: INFO: Container kube-proxy ready: true, restart count 0 May 1 15:48:07.774: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 1 15:48:07.778: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 15:48:07.778: INFO: Container kube-proxy ready: true, restart count 0 May 1 15:48:07.778: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 15:48:07.778: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 May 1 15:48:08.335: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker May 1 15:48:08.335: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2 May 1 15:48:08.335: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2 May 1 15:48:08.335: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker STEP: Starting Pods to consume most of the cluster CPU. May 1 15:48:08.335: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker May 1 15:48:08.341: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b.160af22bcb47a218], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4369/filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b.160af22c50e95c5a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b.160af22d1fd83a11], Reason = [Created], Message = [Created container filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b] STEP: Considering event: Type = [Normal], Name = [filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b.160af22d3923004c], Reason = [Started], Message = [Started container filler-pod-ade1692b-a74f-444d-9465-5ef7429cb48b] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4.160af22bcf073ac1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4369/filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4.160af22c368eea6a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4.160af22cabe91bb5], Reason = [Created], Message = [Created container filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4] STEP: Considering event: Type = [Normal], Name = [filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4.160af22cf54e53e3], Reason = [Started], Message = [Started container filler-pod-cd774fa8-c308-41a0-8262-8a41b68261a4] STEP: Considering event: Type = [Warning], Name = [additional-pod.160af22dae2ed3fe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, 
that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:17.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4369" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:10.616 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":83,"skipped":1553,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:48:17.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-3cae9e93-6a70-45ab-a2eb-24ba8c5d5138 STEP: Creating a pod to test consume secrets May 1 15:48:17.833: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e" in namespace "projected-2674" to be "Succeeded or Failed" May 1 15:48:17.866: INFO: Pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.550285ms May 1 15:48:19.869: INFO: Pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035983548s May 1 15:48:21.874: INFO: Pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e": Phase="Running", Reason="", readiness=true. Elapsed: 4.040641902s May 1 15:48:24.279: INFO: Pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446115346s STEP: Saw pod success May 1 15:48:24.279: INFO: Pod "pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e" satisfied condition "Succeeded or Failed" May 1 15:48:24.407: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e container projected-secret-volume-test: STEP: delete the pod May 1 15:48:24.696: INFO: Waiting for pod pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e to disappear May 1 15:48:24.798: INFO: Pod pod-projected-secrets-3d24453e-200d-431f-9be1-51451a31230e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:24.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2674" for this suite. 
• [SLOW TEST:7.642 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1554,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:48:25.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 1 15:48:26.545: INFO: >>> kubeConfig: /root/.kube/config May 1 15:48:29.894: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:41.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-6030" for this suite. • [SLOW TEST:16.548 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":85,"skipped":1568,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:48:41.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 1 15:48:48.991: INFO: Successfully updated pod "adopt-release-472b5" STEP: Checking that the Job readopts the Pod May 1 15:48:48.991: INFO: Waiting up to 15m0s for pod "adopt-release-472b5" in namespace "job-2234" to be "adopted" May 1 15:48:49.002: INFO: Pod "adopt-release-472b5": Phase="Running", Reason="", readiness=true. 
Elapsed: 11.310474ms May 1 15:48:51.007: INFO: Pod "adopt-release-472b5": Phase="Running", Reason="", readiness=true. Elapsed: 2.015913347s May 1 15:48:51.007: INFO: Pod "adopt-release-472b5" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 1 15:48:51.518: INFO: Successfully updated pod "adopt-release-472b5" STEP: Checking that the Job releases the Pod May 1 15:48:51.518: INFO: Waiting up to 15m0s for pod "adopt-release-472b5" in namespace "job-2234" to be "released" May 1 15:48:51.528: INFO: Pod "adopt-release-472b5": Phase="Running", Reason="", readiness=true. Elapsed: 9.850304ms May 1 15:48:53.532: INFO: Pod "adopt-release-472b5": Phase="Running", Reason="", readiness=true. Elapsed: 2.013586272s May 1 15:48:53.532: INFO: Pod "adopt-release-472b5" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:48:53.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2234" for this suite. 
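The adopt/release behavior exercised by the Job test above hinges on label-selector matching: the controller adopts an orphaned pod whose labels satisfy the Job's selector, and releases a pod once its labels no longer match. A minimal sketch of that subset-matching rule (illustrative only, not the controller's actual code; the selector and label values below are hypothetical):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A matchLabels selector matches when every key/value pair
    in the selector is present in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"job-name": "adopt-release"}          # hypothetical Job selector
orphan = {"job-name": "adopt-release"}            # labels match -> readopted
relabeled = {"job-name": "adopt-release-stale"}   # labels changed -> released

print(selector_matches(selector, orphan))     # True
print(selector_matches(selector, relabeled))  # False
```

The real controllers also honor `matchExpressions` and ownership via `ownerReferences`; this sketch covers only the `matchLabels` subset rule.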
• [SLOW TEST:11.605 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":86,"skipped":1590,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:48:53.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 15:48:55.071: INFO: Waiting up to 5m0s for pod "pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847" in namespace "emptydir-1506" to be "Succeeded or Failed" May 1 15:48:55.341: INFO: Pod "pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847": Phase="Pending", Reason="", readiness=false. Elapsed: 270.190764ms May 1 15:48:57.367: INFO: Pod "pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296448972s May 1 15:48:59.382: INFO: Pod "pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.310845086s STEP: Saw pod success May 1 15:48:59.382: INFO: Pod "pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847" satisfied condition "Succeeded or Failed" May 1 15:48:59.384: INFO: Trying to get logs from node kali-worker pod pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847 container test-container: STEP: delete the pod May 1 15:49:00.015: INFO: Waiting for pod pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847 to disappear May 1 15:49:00.031: INFO: Pod pod-5ebb0ce6-2ee3-4ad0-a0be-d1a49e65b847 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:00.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1506" for this suite. • [SLOW TEST:6.550 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1591,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:00.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 1 15:49:00.266: INFO: Waiting up to 5m0s for pod "pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4" in namespace "emptydir-7115" to be "Succeeded or Failed" May 1 15:49:00.331: INFO: Pod "pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 64.482569ms May 1 15:49:02.402: INFO: Pod "pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135126049s May 1 15:49:04.406: INFO: Pod "pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139332308s STEP: Saw pod success May 1 15:49:04.406: INFO: Pod "pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4" satisfied condition "Succeeded or Failed" May 1 15:49:04.410: INFO: Trying to get logs from node kali-worker pod pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4 container test-container: STEP: delete the pod May 1 15:49:04.506: INFO: Waiting for pod pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4 to disappear May 1 15:49:04.530: INFO: Pod pod-e558f4d1-5d6f-46b6-b115-c9643e345ed4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:04.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7115" for this suite. 
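The two emptyDir tests above verify that a file created on a tmpfs-backed volume carries the requested permission bits (0644 and 0777 respectively). A small sketch of the same check done locally (an analogue of what the test container verifies, not the e2e framework's own code):

```python
import os
import stat
import tempfile

def create_with_mode(path: str, mode: int) -> str:
    """Create a file with an explicit permission mode and return the
    rwx string that a permission check would print (e.g. '-rw-r--r--')."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    os.close(fd)
    os.chmod(path, mode)  # apply the mode regardless of the process umask
    return stat.filemode(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    print(create_with_mode(os.path.join(d, "f0644"), 0o644))  # -rw-r--r--
    print(create_with_mode(os.path.join(d, "f0777"), 0o777))  # -rwxrwxrwx
```

In the actual tests the volume is declared with `medium: Memory` so the kubelet mounts tmpfs, and the mode comes from the volume mount's file permissions rather than a local `chmod`.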
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1601,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:04.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 1 15:49:04.987: INFO: Waiting up to 5m0s for pod "downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c" in namespace "downward-api-9977" to be "Succeeded or Failed" May 1 15:49:05.009: INFO: Pod "downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.39652ms May 1 15:49:07.014: INFO: Pod "downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026952645s May 1 15:49:09.018: INFO: Pod "downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031534021s STEP: Saw pod success May 1 15:49:09.018: INFO: Pod "downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c" satisfied condition "Succeeded or Failed" May 1 15:49:09.021: INFO: Trying to get logs from node kali-worker pod downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c container dapi-container: STEP: delete the pod May 1 15:49:09.044: INFO: Waiting for pod downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c to disappear May 1 15:49:09.048: INFO: Pod downward-api-533462cd-3630-4ecd-85f1-3bd40a55bc1c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:09.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9977" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:09.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6637 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 
15:49:09.147: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 1 15:49:09.181: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 15:49:11.185: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 15:49:13.186: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 15:49:15.186: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:17.186: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:19.198: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:21.184: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:23.198: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:25.185: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 15:49:27.246: INFO: The status of Pod netserver-0 is Running (Ready = true) May 1 15:49:27.268: INFO: The status of Pod netserver-1 is Running (Ready = false) May 1 15:49:29.310: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 1 15:49:37.812: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.2.234&port=8081&tries=1'] Namespace:pod-network-test-6637 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:49:37.812: INFO: >>> kubeConfig: /root/.kube/config I0501 15:49:37.836069 7 log.go:172] (0xc005fe4580) (0xc001b560a0) Create stream I0501 15:49:37.836102 7 log.go:172] (0xc005fe4580) (0xc001b560a0) Stream added, broadcasting: 1 I0501 15:49:37.837585 7 log.go:172] (0xc005fe4580) Reply frame received for 1 I0501 15:49:37.837614 7 log.go:172] (0xc005fe4580) (0xc001874000) Create stream I0501 15:49:37.837624 7 log.go:172] (0xc005fe4580) (0xc001874000) Stream 
added, broadcasting: 3 I0501 15:49:37.838172 7 log.go:172] (0xc005fe4580) Reply frame received for 3 I0501 15:49:37.838194 7 log.go:172] (0xc005fe4580) (0xc001b56320) Create stream I0501 15:49:37.838202 7 log.go:172] (0xc005fe4580) (0xc001b56320) Stream added, broadcasting: 5 I0501 15:49:37.838750 7 log.go:172] (0xc005fe4580) Reply frame received for 5 I0501 15:49:37.922449 7 log.go:172] (0xc005fe4580) Data frame received for 3 I0501 15:49:37.922486 7 log.go:172] (0xc001874000) (3) Data frame handling I0501 15:49:37.922510 7 log.go:172] (0xc001874000) (3) Data frame sent I0501 15:49:37.922906 7 log.go:172] (0xc005fe4580) Data frame received for 5 I0501 15:49:37.922966 7 log.go:172] (0xc001b56320) (5) Data frame handling I0501 15:49:37.923076 7 log.go:172] (0xc005fe4580) Data frame received for 3 I0501 15:49:37.923113 7 log.go:172] (0xc001874000) (3) Data frame handling I0501 15:49:37.924901 7 log.go:172] (0xc005fe4580) Data frame received for 1 I0501 15:49:37.924914 7 log.go:172] (0xc001b560a0) (1) Data frame handling I0501 15:49:37.924922 7 log.go:172] (0xc001b560a0) (1) Data frame sent I0501 15:49:37.925411 7 log.go:172] (0xc005fe4580) (0xc001b560a0) Stream removed, broadcasting: 1 I0501 15:49:37.925537 7 log.go:172] (0xc005fe4580) Go away received I0501 15:49:37.925771 7 log.go:172] (0xc005fe4580) (0xc001b560a0) Stream removed, broadcasting: 1 I0501 15:49:37.925795 7 log.go:172] (0xc005fe4580) (0xc001874000) Stream removed, broadcasting: 3 I0501 15:49:37.925815 7 log.go:172] (0xc005fe4580) (0xc001b56320) Stream removed, broadcasting: 5 May 1 15:49:37.925: INFO: Waiting for responses: map[] May 1 15:49:37.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.235:8080/dial?request=hostname&protocol=udp&host=10.244.1.196&port=8081&tries=1'] Namespace:pod-network-test-6637 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:49:37.928: INFO: >>> kubeConfig: 
/root/.kube/config I0501 15:49:37.952967 7 log.go:172] (0xc005dc69a0) (0xc0018c0000) Create stream I0501 15:49:37.952997 7 log.go:172] (0xc005dc69a0) (0xc0018c0000) Stream added, broadcasting: 1 I0501 15:49:37.954891 7 log.go:172] (0xc005dc69a0) Reply frame received for 1 I0501 15:49:37.954926 7 log.go:172] (0xc005dc69a0) (0xc0018c0140) Create stream I0501 15:49:37.954940 7 log.go:172] (0xc005dc69a0) (0xc0018c0140) Stream added, broadcasting: 3 I0501 15:49:37.955741 7 log.go:172] (0xc005dc69a0) Reply frame received for 3 I0501 15:49:37.955775 7 log.go:172] (0xc005dc69a0) (0xc0018740a0) Create stream I0501 15:49:37.955786 7 log.go:172] (0xc005dc69a0) (0xc0018740a0) Stream added, broadcasting: 5 I0501 15:49:37.956583 7 log.go:172] (0xc005dc69a0) Reply frame received for 5 I0501 15:49:38.019671 7 log.go:172] (0xc005dc69a0) Data frame received for 3 I0501 15:49:38.019698 7 log.go:172] (0xc0018c0140) (3) Data frame handling I0501 15:49:38.019716 7 log.go:172] (0xc0018c0140) (3) Data frame sent I0501 15:49:38.020446 7 log.go:172] (0xc005dc69a0) Data frame received for 5 I0501 15:49:38.020478 7 log.go:172] (0xc0018740a0) (5) Data frame handling I0501 15:49:38.020748 7 log.go:172] (0xc005dc69a0) Data frame received for 3 I0501 15:49:38.020772 7 log.go:172] (0xc0018c0140) (3) Data frame handling I0501 15:49:38.022463 7 log.go:172] (0xc005dc69a0) Data frame received for 1 I0501 15:49:38.022493 7 log.go:172] (0xc0018c0000) (1) Data frame handling I0501 15:49:38.022514 7 log.go:172] (0xc0018c0000) (1) Data frame sent I0501 15:49:38.022533 7 log.go:172] (0xc005dc69a0) (0xc0018c0000) Stream removed, broadcasting: 1 I0501 15:49:38.022564 7 log.go:172] (0xc005dc69a0) Go away received I0501 15:49:38.022805 7 log.go:172] (0xc005dc69a0) (0xc0018c0000) Stream removed, broadcasting: 1 I0501 15:49:38.022829 7 log.go:172] (0xc005dc69a0) (0xc0018c0140) Stream removed, broadcasting: 3 I0501 15:49:38.022846 7 log.go:172] (0xc005dc69a0) (0xc0018740a0) Stream removed, broadcasting: 5 May 1 
15:49:38.022: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6637" for this suite. • [SLOW TEST:28.973 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:38.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
STEP: Creating a pod to test downward API volume plugin May 1 15:49:38.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f" in namespace "projected-9881" to be "Succeeded or Failed" May 1 15:49:38.235: INFO: Pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.439284ms May 1 15:49:40.729: INFO: Pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497239929s May 1 15:49:42.736: INFO: Pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.504564523s May 1 15:49:44.840: INFO: Pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.608274809s STEP: Saw pod success May 1 15:49:44.840: INFO: Pod "downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f" satisfied condition "Succeeded or Failed" May 1 15:49:44.903: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f container client-container: STEP: delete the pod May 1 15:49:45.364: INFO: Waiting for pod downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f to disappear May 1 15:49:45.576: INFO: Pod downwardapi-volume-5ec3a09d-1af1-47d2-bc30-389a3030cc1f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:45.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9881" for this suite. 
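The projected downwardAPI test above exposes the container's memory request through a volume file via `resourceFieldRef`; with the default divisor of "1" the quantity is written as a plain byte count. A simplified sketch of that conversion for binary suffixes (the real parser is `resource.Quantity` in k8s.io/apimachinery, which handles many more forms):

```python
# Simplified conversion of a Kubernetes memory quantity string to the
# integer byte count the downward API writes with the default divisor "1".
_BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def memory_quantity_to_bytes(quantity: str) -> int:
    """Convert e.g. '64Mi' -> 67108864. Handles binary suffixes and
    plain byte counts only."""
    for suffix, factor in _BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # no suffix: already bytes

print(memory_quantity_to_bytes("64Mi"))  # 67108864
```

A non-"1" divisor in the `resourceFieldRef` would divide this value before it is written to the volume file.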
• [SLOW TEST:7.745 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1672,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:45.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4663/configmap-test-5be32202-4dc1-4554-9296-ba9700b51307 STEP: Creating a pod to test consume configMaps May 1 15:49:47.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d" in namespace "configmap-4663" to be "Succeeded or Failed" May 1 15:49:47.498: INFO: Pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 490.244624ms May 1 15:49:49.502: INFO: Pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.493946962s May 1 15:49:51.546: INFO: Pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537621873s May 1 15:49:53.577: INFO: Pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.569051318s STEP: Saw pod success May 1 15:49:53.577: INFO: Pod "pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d" satisfied condition "Succeeded or Failed" May 1 15:49:53.580: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d container env-test: STEP: delete the pod May 1 15:49:53.889: INFO: Waiting for pod pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d to disappear May 1 15:49:53.900: INFO: Pod pod-configmaps-94897447-bd10-496c-94c3-2693870c8a7d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:49:53.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4663" for this suite. 
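The ConfigMap test above injects a ConfigMap key into the container's environment. A stripped-down analogue of how `configMapKeyRef` entries resolve to environment variables (the ConfigMap name and keys below are hypothetical, not the ones generated by the test):

```python
def resolve_env(env_spec: list, configmaps: dict) -> dict:
    """Resolve a pod-style env list, where each entry carries either a
    literal 'value' or a 'configMapKeyRef' into a ConfigMap's data."""
    resolved = {}
    for entry in env_spec:
        if "value" in entry:
            resolved[entry["name"]] = entry["value"]
        else:
            ref = entry["configMapKeyRef"]
            resolved[entry["name"]] = configmaps[ref["name"]]["data"][ref["key"]]
    return resolved

# Hypothetical ConfigMap and env spec mirroring the test's shape.
cms = {"configmap-test": {"data": {"data-1": "value-1"}}}
spec = [{"name": "CONFIG_DATA_1",
         "configMapKeyRef": {"name": "configmap-test", "key": "data-1"}}]
print(resolve_env(spec, cms))  # {'CONFIG_DATA_1': 'value-1'}
```

Kubernetes additionally supports `envFrom` to import every key of a ConfigMap at once; this sketch covers only per-key references.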
• [SLOW TEST:8.131 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1681,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:49:53.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-dvv6d in namespace proxy-3813 I0501 15:49:54.631893 7 runners.go:190] Created replication controller with name: proxy-service-dvv6d, namespace: proxy-3813, replica count: 1 I0501 15:49:55.682301 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:49:56.682539 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:49:57.682769 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:49:58.682992 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 15:49:59.683185 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 15:50:00.683359 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 15:50:01.683528 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 15:50:02.683702 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0501 15:50:03.683873 7 runners.go:190] proxy-service-dvv6d Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 15:50:03.687: INFO: setup took 9.218332278s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 1 15:50:03.708: INFO: (0) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 20.930509ms) May 1 15:50:03.708: INFO: (0) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 21.034633ms) May 1 15:50:03.708: INFO: (0) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 21.191451ms) May 1 15:50:03.708: INFO: (0) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 21.331638ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 27.881821ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 27.89953ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 28.056558ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 28.044096ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 28.102196ms) May 1 15:50:03.715: INFO: (0) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 28.339214ms) May 1 15:50:03.716: INFO: (0) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 28.856193ms) May 1 15:50:03.718: INFO: (0) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 31.071815ms) May 1 15:50:03.718: INFO: (0) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 31.063389ms) May 1 15:50:03.721: INFO: (0) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 34.299256ms) May 1 15:50:03.721: INFO: (0) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 34.484057ms) May 1 15:50:03.724: INFO: (0) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... 
(200; 8.081604ms) May 1 15:50:03.734: INFO: (1) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 8.399611ms) May 1 15:50:03.734: INFO: (1) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 8.302734ms) May 1 15:50:03.734: INFO: (1) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... (200; 8.820817ms) May 1 15:50:03.735: INFO: (1) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 9.78874ms) May 1 15:50:03.735: INFO: (1) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 9.475962ms) May 1 15:50:03.736: INFO: (1) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 11.197405ms) May 1 15:50:03.736: INFO: (1) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 10.923661ms) May 1 15:50:03.736: INFO: (1) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 11.025993ms) May 1 15:50:03.736: INFO: (1) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 11.002387ms) May 1 15:50:03.736: INFO: (1) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 10.930592ms) May 1 15:50:03.739: INFO: (2) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 2.513645ms) May 1 15:50:03.740: INFO: (2) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 4.02763ms) May 1 15:50:03.740: INFO: (2) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.04689ms) May 1 15:50:03.740: INFO: (2) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.032206ms) May 1 15:50:03.741: INFO: (2) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 4.082447ms) May 1 15:50:03.741: INFO: (2) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 4.430169ms) May 1 15:50:03.741: INFO: (2) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.659988ms) May 1 15:50:03.741: INFO: (2) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 4.671268ms) May 1 15:50:03.741: INFO: (2) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.724643ms) May 1 15:50:03.742: INFO: (2) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.616071ms) May 1 15:50:03.742: INFO: (2) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 5.535907ms) May 1 15:50:03.742: INFO: (2) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.803984ms) May 1 15:50:03.742: INFO: (2) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 6.034221ms) May 1 15:50:03.742: INFO: (2) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 6.040232ms) May 1 15:50:03.740: INFO: (2) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... 
(200; 4.516871ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.628133ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.616468ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.628295ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.149785ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.267117ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 5.256722ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 5.241906ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 5.3309ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 5.280842ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 5.308121ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.460337ms) May 1 15:50:03.750: INFO: (3) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.443973ms) May 1 15:50:03.751: INFO: (3) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... 
(200; 4.095151ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.177059ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 4.205802ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.343877ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.278444ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.377668ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 4.357856ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 4.376545ms) May 1 15:50:03.755: INFO: (4) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 4.407706ms) May 1 15:50:03.756: INFO: (4) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... 
(200; 5.118098ms) May 1 15:50:03.763: INFO: (5) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 5.220477ms) May 1 15:50:03.763: INFO: (5) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 5.11409ms) May 1 15:50:03.763: INFO: (5) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 5.173951ms) May 1 15:50:03.763: INFO: (5) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.251499ms) May 1 15:50:03.763: INFO: (5) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.310252ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 5.656119ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 5.733372ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.672713ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 5.697558ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 5.706435ms) May 1 15:50:03.764: INFO: (5) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test (200; 5.160352ms) May 1 15:50:03.769: INFO: (6) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.170425ms) May 1 15:50:03.769: INFO: (6) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 5.225576ms) May 1 15:50:03.769: INFO: (6) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.413778ms) May 1 15:50:03.770: INFO: (6) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 5.581871ms) May 1 
15:50:03.770: INFO: (6) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... (200; 5.819028ms) May 1 15:50:03.770: INFO: (6) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 6.264811ms) May 1 15:50:03.770: INFO: (6) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 6.279895ms) May 1 15:50:03.770: INFO: (6) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 6.30885ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 10.52221ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 10.546842ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 10.590117ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 10.677687ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 10.599599ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 10.627436ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 10.928387ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 10.863264ms) May 1 15:50:03.781: INFO: (7) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 10.938461ms) May 1 15:50:03.782: INFO: (7) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 11.043506ms) May 1 15:50:03.782: INFO: (7) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 11.034744ms) May 1 15:50:03.782: INFO: (7) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 11.110826ms) May 1 15:50:03.782: INFO: (7) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... (200; 143.583786ms) May 1 15:50:03.926: INFO: (8) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... (200; 143.657801ms) May 1 15:50:03.927: INFO: (8) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 144.825514ms) May 1 15:50:03.927: INFO: (8) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 144.887356ms) May 1 15:50:03.928: INFO: (8) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 144.904566ms) May 1 15:50:03.928: INFO: (8) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 145.475581ms) May 1 15:50:03.928: INFO: (8) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 145.498866ms) May 1 15:50:03.928: INFO: (8) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 145.759641ms) May 1 15:50:03.929: INFO: (8) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 145.805801ms) May 1 15:50:03.929: INFO: (8) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 145.800343ms) May 1 15:50:03.929: INFO: (8) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 146.447141ms) May 1 15:50:03.929: INFO: (8) 
/api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 146.383044ms) May 1 15:50:03.929: INFO: (8) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... (200; 5.746232ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 5.774832ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 5.952416ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 5.870531ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.868395ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 5.816656ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 5.828318ms) May 1 15:50:03.935: INFO: (9) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 6.052097ms) May 1 15:50:03.936: INFO: (9) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 6.005914ms) May 1 15:50:03.936: INFO: (9) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 6.085914ms) May 1 15:50:03.936: INFO: (9) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 6.672069ms) May 1 15:50:03.936: INFO: (9) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 6.695692ms) May 1 15:50:03.937: INFO: (9) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 7.06584ms) May 1 15:50:03.937: INFO: (9) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 7.140165ms) May 1 15:50:03.943: INFO: (10) 
/api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 6.30025ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 6.32229ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 6.316785ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 6.319823ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 6.377478ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... (200; 6.389025ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 6.453219ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 6.510003ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... 
(200; 6.421251ms) May 1 15:50:03.943: INFO: (10) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 6.473779ms) May 1 15:50:03.944: INFO: (10) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 6.787145ms) May 1 15:50:03.944: INFO: (10) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 6.771501ms) May 1 15:50:03.944: INFO: (10) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 7.0289ms) May 1 15:50:03.944: INFO: (10) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 7.045733ms) May 1 15:50:03.944: INFO: (10) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 6.849375ms) May 1 15:50:03.946: INFO: (11) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... (200; 5.715842ms) May 1 15:50:03.949: INFO: (11) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.801608ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 5.908313ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 6.221986ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 6.474105ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 6.552716ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 6.480539ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 6.730886ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 6.658982ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 6.665786ms) May 1 15:50:03.950: INFO: (11) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 6.714718ms) May 1 15:50:03.951: INFO: (11) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 6.947232ms) May 1 15:50:03.954: INFO: (12) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 3.541654ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 3.910719ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 3.933978ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 3.970808ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 4.221176ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 4.342078ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.28014ms) May 1 15:50:03.955: INFO: (12) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... (200; 7.381001ms) May 1 15:50:03.958: INFO: (12) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 7.579681ms) May 1 15:50:03.958: INFO: (12) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 7.696186ms) May 1 15:50:03.962: INFO: (13) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... (200; 5.228563ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 5.188673ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 5.211497ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.211415ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 5.39426ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... 
(200; 5.424759ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 5.376556ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.425614ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 5.530805ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 5.699714ms) May 1 15:50:03.964: INFO: (13) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 5.565432ms) May 1 15:50:03.967: INFO: (14) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 2.455223ms) May 1 15:50:03.967: INFO: (14) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 2.37973ms) May 1 15:50:03.968: INFO: (14) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 3.098753ms) May 1 15:50:03.969: INFO: (14) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... (200; 3.55264ms) May 1 15:50:03.969: INFO: (14) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 3.923239ms) May 1 15:50:03.970: INFO: (14) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 5.073655ms) May 1 15:50:03.970: INFO: (14) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 4.90906ms) May 1 15:50:03.970: INFO: (14) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.800506ms) May 1 15:50:03.970: INFO: (14) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... 
(200; 4.740826ms) May 1 15:50:03.971: INFO: (14) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 5.662376ms) May 1 15:50:03.971: INFO: (14) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 5.549045ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 4.189776ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.226493ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... (200; 4.236706ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.280805ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 4.3478ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.399819ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.303366ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.338217ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... (200; 4.380719ms) May 1 15:50:03.975: INFO: (15) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test<... 
(200; 2.445078ms) May 1 15:50:03.979: INFO: (16) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 2.546705ms) May 1 15:50:03.980: INFO: (16) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 2.553171ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 4.10267ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.319787ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 4.3425ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.296095ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.353914ms) May 1 15:50:03.981: INFO: (16) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... 
(200; 4.613624ms) May 1 15:50:03.982: INFO: (16) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 5.455419ms) May 1 15:50:03.983: INFO: (16) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 5.591954ms) May 1 15:50:03.983: INFO: (16) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 5.557355ms) May 1 15:50:03.983: INFO: (16) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.594581ms) May 1 15:50:03.983: INFO: (16) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 5.560626ms) May 1 15:50:03.988: INFO: (17) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.997435ms) May 1 15:50:03.988: INFO: (17) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 5.262085ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 8.198902ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 8.259086ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:460/proxy/: tls baz (200; 8.316188ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 8.358766ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... 
(200; 8.389545ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 8.574388ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 8.630505ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 8.677507ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 8.699775ms) May 1 15:50:03.991: INFO: (17) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... (200; 10.468198ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.37936ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.283758ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:162/proxy/: bar (200; 4.340812ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:1080/proxy/: ... (200; 4.433843ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/http:proxy-service-dvv6d-9q67t:160/proxy/: foo (200; 4.860283ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname1/proxy/: foo (200; 4.837585ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... 
(200; 4.937872ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t/proxy/: test (200; 4.949715ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 5.089494ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname1/proxy/: foo (200; 5.038706ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.257349ms) May 1 15:50:03.998: INFO: (18) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: ... (200; 5.527227ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/services/proxy-service-dvv6d:portname2/proxy/: bar (200; 5.647584ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:443/proxy/: test (200; 5.847724ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/pods/https:proxy-service-dvv6d-9q67t:462/proxy/: tls qux (200; 5.813966ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/pods/proxy-service-dvv6d-9q67t:1080/proxy/: test<... 
(200; 5.882203ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname2/proxy/: tls qux (200; 6.09748ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/services/https:proxy-service-dvv6d:tlsportname1/proxy/: tls baz (200; 6.169329ms) May 1 15:50:04.005: INFO: (19) /api/v1/namespaces/proxy-3813/services/http:proxy-service-dvv6d:portname2/proxy/: bar (200; 6.363818ms) STEP: deleting ReplicationController proxy-service-dvv6d in namespace proxy-3813, will wait for the garbage collector to delete the pods May 1 15:50:04.064: INFO: Deleting ReplicationController proxy-service-dvv6d took: 6.931157ms May 1 15:50:04.364: INFO: Terminating ReplicationController proxy-service-dvv6d pods took: 300.18819ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:50:13.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3813" for this suite. 
• [SLOW TEST:19.766 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":93,"skipped":1688,"failed":0}
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:50:13.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 1 15:50:13.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1496'
May 1 15:50:14.071: INFO: stderr: ""
May 1 15:50:14.071: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP:
verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
May 1 15:50:14.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1496'
May 1 15:50:19.072: INFO: stderr: ""
May 1 15:50:19.072: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:50:19.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1496" for this suite.
• [SLOW TEST:5.641 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":94,"skipped":1688,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:50:19.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a
default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-94266a9c-61e2-4ebc-af23-03b56a06c151
STEP: Creating a pod to test consume secrets
May 1 15:50:20.269: INFO: Waiting up to 5m0s for pod "pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08" in namespace "secrets-9622" to be "Succeeded or Failed"
May 1 15:50:20.333: INFO: Pod "pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08": Phase="Pending", Reason="", readiness=false. Elapsed: 64.734447ms
May 1 15:50:22.425: INFO: Pod "pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156766971s
May 1 15:50:24.429: INFO: Pod "pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159996875s
STEP: Saw pod success
May 1 15:50:24.429: INFO: Pod "pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08" satisfied condition "Succeeded or Failed"
May 1 15:50:24.431: INFO: Trying to get logs from node kali-worker pod pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08 container secret-volume-test:
STEP: delete the pod
May 1 15:50:24.895: INFO: Waiting for pod pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08 to disappear
May 1 15:50:24.952: INFO: Pod pod-secrets-c3c43cdf-a739-46ea-835a-ab486c30ce08 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:50:24.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9622" for this suite.
STEP: Destroying namespace "secret-namespace-1163" for this suite.
• [SLOW TEST:5.662 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1702,"failed":0}
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:50:24.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5681
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5681
STEP: creating replication controller externalsvc in namespace services-5681
I0501 15:50:25.508209 7 runners.go:190] Created replication controller with name: externalsvc, namespace:
services-5681, replica count: 2 I0501 15:50:28.558655 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:50:31.558846 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 1 15:50:31.625: INFO: Creating new exec pod May 1 15:50:35.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5681 execpod9rgs4 -- /bin/sh -x -c nslookup nodeport-service' May 1 15:50:39.218: INFO: stderr: "I0501 15:50:39.002327 1974 log.go:172] (0xc000888000) (0xc000784000) Create stream\nI0501 15:50:39.002377 1974 log.go:172] (0xc000888000) (0xc000784000) Stream added, broadcasting: 1\nI0501 15:50:39.006049 1974 log.go:172] (0xc000888000) Reply frame received for 1\nI0501 15:50:39.006100 1974 log.go:172] (0xc000888000) (0xc00080e000) Create stream\nI0501 15:50:39.006113 1974 log.go:172] (0xc000888000) (0xc00080e000) Stream added, broadcasting: 3\nI0501 15:50:39.007275 1974 log.go:172] (0xc000888000) Reply frame received for 3\nI0501 15:50:39.007310 1974 log.go:172] (0xc000888000) (0xc00080e0a0) Create stream\nI0501 15:50:39.007326 1974 log.go:172] (0xc000888000) (0xc00080e0a0) Stream added, broadcasting: 5\nI0501 15:50:39.008289 1974 log.go:172] (0xc000888000) Reply frame received for 5\nI0501 15:50:39.135321 1974 log.go:172] (0xc000888000) Data frame received for 5\nI0501 15:50:39.135356 1974 log.go:172] (0xc00080e0a0) (5) Data frame handling\nI0501 15:50:39.135379 1974 log.go:172] (0xc00080e0a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0501 15:50:39.210723 1974 log.go:172] (0xc000888000) Data frame received for 3\nI0501 15:50:39.210776 1974 log.go:172] (0xc00080e000) (3) Data frame handling\nI0501 15:50:39.210815 1974 log.go:172] 
(0xc00080e000) (3) Data frame sent\nI0501 15:50:39.211721 1974 log.go:172] (0xc000888000) Data frame received for 3\nI0501 15:50:39.211753 1974 log.go:172] (0xc00080e000) (3) Data frame handling\nI0501 15:50:39.211783 1974 log.go:172] (0xc00080e000) (3) Data frame sent\nI0501 15:50:39.212352 1974 log.go:172] (0xc000888000) Data frame received for 3\nI0501 15:50:39.212390 1974 log.go:172] (0xc00080e000) (3) Data frame handling\nI0501 15:50:39.213002 1974 log.go:172] (0xc000888000) Data frame received for 5\nI0501 15:50:39.213021 1974 log.go:172] (0xc00080e0a0) (5) Data frame handling\nI0501 15:50:39.214734 1974 log.go:172] (0xc000888000) Data frame received for 1\nI0501 15:50:39.214753 1974 log.go:172] (0xc000784000) (1) Data frame handling\nI0501 15:50:39.214762 1974 log.go:172] (0xc000784000) (1) Data frame sent\nI0501 15:50:39.214771 1974 log.go:172] (0xc000888000) (0xc000784000) Stream removed, broadcasting: 1\nI0501 15:50:39.214878 1974 log.go:172] (0xc000888000) Go away received\nI0501 15:50:39.215064 1974 log.go:172] (0xc000888000) (0xc000784000) Stream removed, broadcasting: 1\nI0501 15:50:39.215082 1974 log.go:172] (0xc000888000) (0xc00080e000) Stream removed, broadcasting: 3\nI0501 15:50:39.215092 1974 log.go:172] (0xc000888000) (0xc00080e0a0) Stream removed, broadcasting: 5\n" May 1 15:50:39.219: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5681.svc.cluster.local\tcanonical name = externalsvc.services-5681.svc.cluster.local.\nName:\texternalsvc.services-5681.svc.cluster.local\nAddress: 10.97.160.32\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5681, will wait for the garbage collector to delete the pods May 1 15:50:39.284: INFO: Deleting ReplicationController externalsvc took: 5.891229ms May 1 15:50:39.384: INFO: Terminating ReplicationController externalsvc pods took: 100.157314ms May 1 15:50:53.836: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] 
[sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:50:53.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5681" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:28.887 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":96,"skipped":1702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:50:53.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:51:00.754: INFO: Waiting up to 5m0s for pod "client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f" in namespace "pods-2058" 
to be "Succeeded or Failed" May 1 15:51:00.817: INFO: Pod "client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.113379ms May 1 15:51:02.846: INFO: Pod "client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091927386s May 1 15:51:04.863: INFO: Pod "client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109129716s STEP: Saw pod success May 1 15:51:04.863: INFO: Pod "client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f" satisfied condition "Succeeded or Failed" May 1 15:51:04.866: INFO: Trying to get logs from node kali-worker pod client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f container env3cont: STEP: delete the pod May 1 15:51:04.910: INFO: Waiting for pod client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f to disappear May 1 15:51:04.948: INFO: Pod client-envvars-6aef0ece-b4f5-4c09-bc4d-f4991891663f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:51:04.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2058" for this suite. 
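The pods test above checks that kubelet injects Docker-link-style environment variables for each active service. The documented naming convention (service name uppercased, dashes converted to underscores, suffixed with `_SERVICE_HOST` / `_SERVICE_PORT`) can be sketched as:

```python
def service_env_vars(service_name, cluster_ip, port):
    """Environment variable names kubelet injects for an active service,
    per the documented {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT
    convention (values here are illustrative)."""
    key = service_name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }

env = service_env_vars("redis-master", "10.0.0.11", 6379)
```

Because these variables are only set for services that exist when the pod starts, the test creates its server pod and service first, then launches the client pod whose environment it inspects.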
• [SLOW TEST:11.130 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1778,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:51:04.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:51:05.342: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 1 15:51:08.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6724 create -f -' May 1 15:51:14.071: INFO: stderr: "" May 1 15:51:14.071: INFO: stdout: "e2e-test-crd-publish-openapi-6015-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 1 15:51:14.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6724 delete e2e-test-crd-publish-openapi-6015-crds test-cr' May 1 15:51:14.202: INFO: stderr: "" May 1 15:51:14.202: INFO: stdout: "e2e-test-crd-publish-openapi-6015-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 1 15:51:14.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6724 apply -f -' May 1 15:51:14.463: INFO: stderr: "" May 1 15:51:14.463: INFO: stdout: "e2e-test-crd-publish-openapi-6015-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 1 15:51:14.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6724 delete e2e-test-crd-publish-openapi-6015-crds test-cr' May 1 15:51:14.568: INFO: stderr: "" May 1 15:51:14.568: INFO: stdout: "e2e-test-crd-publish-openapi-6015-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 1 15:51:14.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6015-crds' May 1 15:51:14.832: INFO: stderr: "" May 1 15:51:14.832: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6015-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:51:17.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6724" for this suite. 
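The CRD names in the run above follow the fixed CustomResourceDefinition naming rules: a CRD's `metadata.name` must be exactly `<plural>.<group>`, and kubectl echoes a created custom resource as `<resource>.<group>/<name>`. A sketch of those two conventions, checked against the values from this log:

```python
def crd_name(plural, group):
    """A CRD's metadata.name must be exactly `<plural>.<group>`."""
    return f"{plural}.{group}"

def cr_display_name(resource, group, name):
    """How kubectl echoes a created custom resource: `<resource>.<group>/<name>`."""
    return f"{resource}.{group}/{name}"

GROUP = "crd-publish-openapi-test-empty.example.com"
```

With the plural `e2e-test-crd-publish-openapi-6015-crds` and the group above, `crd_name` reproduces the resource name used by the `kubectl delete` invocations in the log.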
• [SLOW TEST:12.854 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":98,"skipped":1786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:51:17.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller May 1 15:51:18.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8693' May 1 15:51:18.529: INFO: stderr: "" May 1 15:51:18.529: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all 
containers in name=update-demo pods to come up. May 1 15:51:18.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8693' May 1 15:51:18.683: INFO: stderr: "" May 1 15:51:18.683: INFO: stdout: "update-demo-nautilus-5sw84 update-demo-nautilus-bxjbz " May 1 15:51:18.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sw84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8693' May 1 15:51:18.797: INFO: stderr: "" May 1 15:51:18.797: INFO: stdout: "" May 1 15:51:18.797: INFO: update-demo-nautilus-5sw84 is created but not running May 1 15:51:23.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8693' May 1 15:51:23.902: INFO: stderr: "" May 1 15:51:23.902: INFO: stdout: "update-demo-nautilus-5sw84 update-demo-nautilus-bxjbz " May 1 15:51:23.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sw84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8693' May 1 15:51:23.998: INFO: stderr: "" May 1 15:51:23.998: INFO: stdout: "true" May 1 15:51:23.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sw84 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8693' May 1 15:51:24.268: INFO: stderr: "" May 1 15:51:24.268: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 15:51:24.268: INFO: validating pod update-demo-nautilus-5sw84 May 1 15:51:24.273: INFO: got data: { "image": "nautilus.jpg" } May 1 15:51:24.273: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 1 15:51:24.273: INFO: update-demo-nautilus-5sw84 is verified up and running May 1 15:51:24.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxjbz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8693' May 1 15:51:24.387: INFO: stderr: "" May 1 15:51:24.387: INFO: stdout: "true" May 1 15:51:24.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxjbz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8693' May 1 15:51:24.478: INFO: stderr: "" May 1 15:51:24.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 1 15:51:24.478: INFO: validating pod update-demo-nautilus-bxjbz May 1 15:51:24.482: INFO: got data: { "image": "nautilus.jpg" } May 1 15:51:24.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 1 15:51:24.482: INFO: update-demo-nautilus-bxjbz is verified up and running STEP: using delete to clean up resources May 1 15:51:24.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8693' May 1 15:51:24.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 15:51:24.631: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 1 15:51:24.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8693' May 1 15:51:24.768: INFO: stderr: "No resources found in kubectl-8693 namespace.\n" May 1 15:51:24.768: INFO: stdout: "" May 1 15:51:24.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8693 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 15:51:24.929: INFO: stderr: "" May 1 15:51:24.929: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:51:24.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8693" for this suite. 
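The Update Demo polling above uses a Go template that prints `true` only when the named container reports a `running` state; an empty stdout means "created but not running" and the loop retries. An equivalent check over a pod's status, sketched in Python (the sample pod dicts are illustrative):

```python
def container_running(pod, container_name):
    """Mirror of the e2e Go template: True iff the named container has a
    containerStatuses entry whose state map contains a `running` key."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(
        s.get("name") == container_name and "running" in s.get("state", {})
        for s in statuses
    )

# Illustrative pod status snapshots (not taken from this run).
pending_pod = {"status": {}}
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-05-01T15:51:20Z"}}}]}}
```

This is why the log shows an empty `stdout: ""` for `update-demo-nautilus-5sw84` at 15:51:18 and `stdout: "true"` five seconds later: the template output flips once the container status carries a running state.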
• [SLOW TEST:7.121 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":99,"skipped":1832,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:51:24.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 15:51:33.305: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:51:33.355: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:51:35.356: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:51:35.360: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:51:37.356: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:51:37.379: INFO: Pod pod-with-prestop-exec-hook still exists May 1 15:51:39.356: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 1 15:51:39.360: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:51:39.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2435" for this suite. 
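The prestop test's teardown above polls every two seconds until the pod is gone (15:51:33, :35, :37, then "no longer exists" at :39). The wait loop it follows can be sketched like this, where `get_pod` is a stand-in for an API lookup that returns `None` once the pod has been deleted:

```python
import time

def wait_for_disappear(get_pod, timeout=60.0, interval=2.0, sleep=time.sleep):
    """Poll `get_pod` until it returns None (pod deleted) or the timeout
    elapses. Returns True if the pod disappeared within the deadline.
    `sleep` is injectable so the loop can be tested without real delays."""
    waited = 0.0
    while waited <= timeout:
        if get_pod() is None:
            return True
        sleep(interval)
        waited += interval
    return False
```

The real framework helper also tolerates transient API errors between polls; that detail is omitted here for brevity.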
• [SLOW TEST:14.403 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:51:39.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 1 15:51:45.535: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7418 PodName:pod-sharedvolume-2ce9df59-26d4-4312-bad0-83789b0a99b8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 15:51:45.535: INFO: >>> kubeConfig: 
/root/.kube/config I0501 15:51:45.572239 7 log.go:172] (0xc002a74630) (0xc0015528c0) Create stream I0501 15:51:45.572290 7 log.go:172] (0xc002a74630) (0xc0015528c0) Stream added, broadcasting: 1 I0501 15:51:45.574756 7 log.go:172] (0xc002a74630) Reply frame received for 1 I0501 15:51:45.574821 7 log.go:172] (0xc002a74630) (0xc000262b40) Create stream I0501 15:51:45.574841 7 log.go:172] (0xc002a74630) (0xc000262b40) Stream added, broadcasting: 3 I0501 15:51:45.576000 7 log.go:172] (0xc002a74630) Reply frame received for 3 I0501 15:51:45.576044 7 log.go:172] (0xc002a74630) (0xc0012cc140) Create stream I0501 15:51:45.576065 7 log.go:172] (0xc002a74630) (0xc0012cc140) Stream added, broadcasting: 5 I0501 15:51:45.577056 7 log.go:172] (0xc002a74630) Reply frame received for 5 I0501 15:51:45.643663 7 log.go:172] (0xc002a74630) Data frame received for 5 I0501 15:51:45.643698 7 log.go:172] (0xc0012cc140) (5) Data frame handling I0501 15:51:45.643722 7 log.go:172] (0xc002a74630) Data frame received for 3 I0501 15:51:45.643736 7 log.go:172] (0xc000262b40) (3) Data frame handling I0501 15:51:45.643758 7 log.go:172] (0xc000262b40) (3) Data frame sent I0501 15:51:45.643781 7 log.go:172] (0xc002a74630) Data frame received for 3 I0501 15:51:45.643795 7 log.go:172] (0xc000262b40) (3) Data frame handling I0501 15:51:45.645545 7 log.go:172] (0xc002a74630) Data frame received for 1 I0501 15:51:45.645591 7 log.go:172] (0xc0015528c0) (1) Data frame handling I0501 15:51:45.645642 7 log.go:172] (0xc0015528c0) (1) Data frame sent I0501 15:51:45.645718 7 log.go:172] (0xc002a74630) (0xc0015528c0) Stream removed, broadcasting: 1 I0501 15:51:45.645753 7 log.go:172] (0xc002a74630) Go away received I0501 15:51:45.645856 7 log.go:172] (0xc002a74630) (0xc0015528c0) Stream removed, broadcasting: 1 I0501 15:51:45.645942 7 log.go:172] (0xc002a74630) (0xc000262b40) Stream removed, broadcasting: 3 I0501 15:51:45.645962 7 log.go:172] (0xc002a74630) (0xc0012cc140) Stream removed, broadcasting: 5 May 1 
15:51:45.645: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:51:45.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7418" for this suite. • [SLOW TEST:6.283 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":101,"skipped":1869,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:51:45.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:51:45.728: INFO: Creating deployment "webserver-deployment" May 1 15:51:45.733: INFO: Waiting for observed generation 1 May 1 15:51:47.966: INFO: Waiting for all required pods to come up May 1 15:51:47.971: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is 
running May 1 15:52:00.172: INFO: Waiting for deployment "webserver-deployment" to complete May 1 15:52:00.178: INFO: Updating deployment "webserver-deployment" with a non-existent image May 1 15:52:00.186: INFO: Updating deployment webserver-deployment May 1 15:52:00.186: INFO: Waiting for observed generation 2 May 1 15:52:02.283: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 1 15:52:02.361: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 1 15:52:02.367: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 1 15:52:02.558: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 1 15:52:02.558: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 1 15:52:02.561: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 1 15:52:02.565: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 1 15:52:02.565: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 1 15:52:02.571: INFO: Updating deployment webserver-deployment May 1 15:52:02.571: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 1 15:52:02.982: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 1 15:52:06.077: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 1 15:52:06.416: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2283 /apis/apps/v1/namespaces/deployment-2283/deployments/webserver-deployment e5f282a7-017d-4c68-b65c-46958b82478f 661294 3 2020-05-01 15:51:45 +0000 UTC 
map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-01 15:52:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[managedFields patch JSON, decimal byte dump elided],}} {kube-controller-manager Update apps/v1 2020-05-01 15:52:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managedFields patch JSON, decimal byte dump elided],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00465ebb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-01 
15:52:02 +0000 UTC,LastTransitionTime:2020-05-01 15:52:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-01 15:52:05 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 1 15:52:06.474: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-2283 /apis/apps/v1/namespaces/deployment-2283/replicasets/webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 661292 3 2020-05-01 15:52:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e5f282a7-017d-4c68-b65c-46958b82478f 0xc00465f207 0xc00465f208}] [] [{kube-controller-manager Update apps/v1 2020-05-01 15:52:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00465f288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 1 15:52:06.474: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 1 15:52:06.474: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-2283 /apis/apps/v1/namespaces/deployment-2283/replicasets/webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 661272 3 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e5f282a7-017d-4c68-b65c-46958b82478f 0xc00465f2e7 0xc00465f2e8}] [] [{kube-controller-manager Update apps/v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00465f358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 1 15:52:06.623: INFO: Pod "webserver-deployment-6676bcd6d4-2jgdm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2jgdm webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-2jgdm d993542a-61fb-4b84-ba04-5cb9c10ac493 661273 0 2020-05-01 15:52:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] 
[{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046caa97 0xc0046caa98}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.623: INFO: Pod "webserver-deployment-6676bcd6d4-b94mz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b94mz webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-b94mz 4122f217-3687-4b10-b74a-8fe1f0451974 661190 0 2020-05-01 15:52:00 +0000 UTC map[name:httpd 
pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cac10 0xc0046cac11}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}} {kubelet Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.624: INFO: Pod "webserver-deployment-6676bcd6d4-gw4f7" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gw4f7 webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-gw4f7 8153b14c-faa8-402d-95ae-390beebedf33 661196 0 2020-05-01 15:52:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cae20 0xc0046cae21}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}} {kubelet Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[... managedFields JSON bytes elided ...],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.624: INFO: Pod "webserver-deployment-6676bcd6d4-jwbwh" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jwbwh webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-jwbwh 97509a36-f89e-497d-96ee-6e6642737483 661258 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cb100 0xc0046cb101}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 98 56 102 56 52 101 99 45 57 56 48 52 45 52 101 97 102 45 97 97 52 102 45 101 101 53 53 55 51 102 50 54 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
<elided: FieldsV1 Raw byte dump continued from the previous line; decodes to managedFields JSON markers for the f:metadata and f:spec fields set by kube-controller-manager>
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.624: INFO: Pod "webserver-deployment-6676bcd6d4-m59rb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-m59rb webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-m59rb 3d073c40-174d-49b2-a5b8-803afd3ef531 661314 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cb2f0 0xc0046cb2f1}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 
<elided: FieldsV1 Raw byte dump continued from the previous line; decodes to managedFields JSON markers for the f:metadata and f:spec fields set by kube-controller-manager>],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[<elided: FieldsV1 Raw byte dump; decodes to managedFields JSON markers for the f:status fields (conditions, containerStatuses, hostIP, startTime) set by the kubelet>
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.625: INFO: Pod "webserver-deployment-6676bcd6d4-qbfmz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qbfmz webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-qbfmz c5b11a98-0e23-4d97-8666-e7380f05407a 661174 0 2020-05-01 15:52:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cb5b0 0xc0046cb5b1}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 98 56 102 56 52 101 99 45 57 56 48 52 45 52 101 97 102 45 97 97 52 102 45 101 101 53 53 55 51 102 50 54 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
<elided: FieldsV1 Raw byte dump continued from the previous line; decodes to managedFields JSON markers for the f:metadata and f:spec fields set by kube-controller-manager>],}} {kubelet Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[<elided: FieldsV1 Raw byte dump; decodes to managedFields JSON markers for the f:status fields (conditions, containerStatuses, hostIP, startTime) set by the kubelet>
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.625: INFO: Pod "webserver-deployment-6676bcd6d4-rbdrp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rbdrp webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-rbdrp bf2abc23-a2fe-40e3-9182-d4b4e3a996e0 661197 0 2020-05-01 15:52:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cb890 0xc0046cb891}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 98 56 102 56 52 101 99 45 57 56 48 52 45 52 101 97 102 45 97 97 52 102 45 101 101 53 53 55 51 102 50 54 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
<elided: FieldsV1 Raw byte dump continued from the previous line; decodes to managedFields JSON markers for the f:metadata and f:spec fields set by kube-controller-manager>],}} {kubelet Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[<elided: FieldsV1 Raw byte dump; decodes to managedFields JSON markers for the f:status fields (conditions, containerStatuses, hostIP, startTime) set by the kubelet>
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.626: INFO: Pod "webserver-deployment-6676bcd6d4-s49b6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s49b6 webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-s49b6 6629c94a-4f33-469c-aee7-c50ddd918513 661323 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cbaf0 0xc0046cbaf1}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b8f84ec-9804-4eaf-aa4f-ee5573f26111\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.626: INFO: Pod "webserver-deployment-6676bcd6d4-s92fd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s92fd webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-s92fd 775883cf-8e0a-44b7-b2db-8cac6a8f6bbe 661284 0 2020-05-01 15:52:02 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cbd30 0xc0046cbd31}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b8f84ec-9804-4eaf-aa4f-ee5573f26111\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.626: INFO: Pod "webserver-deployment-6676bcd6d4-sv4bm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sv4bm webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-sv4bm cb340c5c-1b17-442a-8c99-124292bd2c70 661262 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc0046cbf10 0xc0046cbf11}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b8f84ec-9804-4eaf-aa4f-ee5573f26111\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.627: INFO: Pod "webserver-deployment-6676bcd6d4-t565l" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-t565l webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-t565l 5beee187-0535-4dd9-a77f-e71d8c0a4fb1 661322 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc00462c080 0xc00462c081}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b8f84ec-9804-4eaf-aa4f-ee5573f26111\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.627: INFO: Pod "webserver-deployment-6676bcd6d4-vhqsp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vhqsp webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-vhqsp cabff860-aec6-480c-9c63-0c89eb419459 661304 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc00462c310 0xc00462c311}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 98 56 102 56 52 101 99 45 57 56 48 52 45 52 101 97 102 45 97 97 52 102 45 101 101 53 53 55 51 102 50 54 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.627: INFO: Pod "webserver-deployment-6676bcd6d4-xx6gd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xx6gd webserver-deployment-6676bcd6d4- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-6676bcd6d4-xx6gd 83a5468d-c8f0-430a-9147-801012716def 661201 0 2020-05-01 15:52:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5b8f84ec-9804-4eaf-aa4f-ee5573f26111 0xc00462c580 0xc00462c581}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 98 56 102 56 52 101 99 45 57 56 48 52 45 52 101 97 102 45 97 97 52 102 45 101 101 53 53 55 51 102 50 54 49 49 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:52:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 
125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.628: INFO: Pod "webserver-deployment-84855cf797-2sgqx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2sgqx webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-2sgqx bca53708-a72b-44a6-82bf-d52b3607a83b 661110 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462c7e0 0xc00462c7e1}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:51:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 
107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 52 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.244,StartTime:2020-05-01 15:51:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1f31008104b2def1b0e55a0d966e421feeff72e932cd21e4ca283771c0efd78a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.628: INFO: Pod "webserver-deployment-84855cf797-4hkvb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-4hkvb webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-4hkvb bb281a5d-7bb3-45cd-8ea2-600eba0f5d2b 661266 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462caf7 0xc00462caf8}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.628: INFO: Pod "webserver-deployment-84855cf797-7hm5j" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7hm5j webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-7hm5j 0933cc28-0e30-4360-a576-ee104e2ea15c 661274 0 2020-05-01 15:52:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462cc80 0xc00462cc81}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:02 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.628: INFO: Pod "webserver-deployment-84855cf797-csb5v" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-csb5v webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-csb5v 3b1529d4-1b09-40ec-8bfd-09cbb616751d 661283 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462ce57 0xc00462ce58}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.629: INFO: Pod "webserver-deployment-84855cf797-dpf7p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dpf7p webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-dpf7p f83508c0-d890-4962-9596-f2d781a32014 661268 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d047 0xc00462d048}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.629: INFO: Pod "webserver-deployment-84855cf797-f9m7f" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f9m7f webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-f9m7f a00bdd97-e081-4184-a45b-682ca8e50f17 661265 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d1b0 0xc00462d1b1}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:n
il,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.629: INFO: Pod "webserver-deployment-84855cf797-fl5nt" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fl5nt webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-fl5nt bb7225fd-3dae-4419-a6c4-57771502fb76 661109 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d390 0xc00462d391}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 
(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)],}} {kubelet Update v1 2020-05-01 15:51:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)
101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},Image
PullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.207,StartTime:2020-05-01 15:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://654cca6237aaeedf67da3ca4cf844fb2d3f3074958455dbe02e461db22bdda20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.629: INFO: Pod "webserver-deployment-84855cf797-g4qwc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-g4qwc webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-g4qwc f5632461-b248-4a82-8e77-5addc0c76f5a 661267 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d5c7 0xc00462d5c8}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 
(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.630: INFO: Pod "webserver-deployment-84855cf797-hlbq4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hlbq4 webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-hlbq4 e3a41bbf-d9a7-408b-a9cd-f520d5c9a246 661293 0 2020-05-01 15:52:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d700 0xc00462d701}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 
(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)],}} {kubelet Update v1 2020-05-01 15:52:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.630: INFO: Pod "webserver-deployment-84855cf797-jscb7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jscb7 webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-jscb7 3937f033-dc62-4769-964a-b52f50546e05 661094 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462d887 0xc00462d888}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 
(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)],}} {kubelet Update v1 2020-05-01 15:51:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[(FieldsV1 managed-fields payload: decimal-encoded JSON field markers, elided)
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.206,StartTime:2020-05-01 15:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b3b28ec02746f3ec2adc1c60297bdfa0b5182a7d14facada5dde6db190231dc0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.631: INFO: Pod "webserver-deployment-84855cf797-knnjq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-knnjq webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-knnjq ca3bdb78-eea3-446f-9fd3-334f8644495e 661145 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462da37 0xc00462da38}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:51:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 49 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.210,StartTime:2020-05-01 15:51:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:58 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a0c69502048f54be935659283d0b0cd661b9429d47fe2913a7b9a10dd82132bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.210,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.631: INFO: Pod "webserver-deployment-84855cf797-kw8cs" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kw8cs webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-kw8cs e92dd62b-5535-4645-a90b-12f85f833f9b 661102 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462dc97 0xc00462dc98}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:51:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 52 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.243,StartTime:2020-05-01 15:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://82d3d06e0a61e70f30932fc6d6f54ac1103814fcdcad6f87ca52aba42cb66f1d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.631: INFO: Pod "webserver-deployment-84855cf797-lzj9h" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lzj9h webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-lzj9h 6861f39a-6ff2-4647-a9d9-5c5ce4db527f 661291 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc00462dea7 0xc00462dea8}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 
101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:52:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 
58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.631: INFO: Pod "webserver-deployment-84855cf797-mfvds" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mfvds webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-mfvds 352aa0d0-1f3a-4d3f-9dd0-b59f9447e0cd 661269 0 2020-05-01 15:52:02 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fc0c7 0xc0045fc0c8}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:52:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.632: INFO: Pod "webserver-deployment-84855cf797-mpgfh" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mpgfh webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-mpgfh 0ba2dc2a-d3d4-4ee9-a7e8-3b9b10d2855b 661123 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fc2a7 0xc0045fc2a8}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:51:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.209,StartTime:2020-05-01 15:51:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:58 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7d37ed296ae25416d7be2d3abbcbca23931c9fca5fb18f6608d7ef74f5158beb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.632: INFO: Pod "webserver-deployment-84855cf797-pbqx7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-pbqx7 webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-pbqx7 c0a390e2-7e55-4a46-b46d-e50d67b3ec43 661104 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fc507 0xc0045fc508}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:51:57 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.208\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.208,StartTime:2020-05-01 15:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:55 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c7cbd9d25aba9003177b47b3912ae70ad5ebd232dba8b9234a053f63b017148,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.632: INFO: Pod "webserver-deployment-84855cf797-phbvp" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-phbvp webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-phbvp d06d9db7-e7d5-4330-a4b7-5aba03887b72 661072 0 2020-05-01 15:51:45 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fc777 0xc0045fc778}] [] [{kube-controller-manager Update v1 2020-05-01 15:51:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:51:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:51:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.242,StartTime:2020-05-01 15:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 15:51:50 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://99d56b273bf0f9317bef34edd9d29aef18db6bcfa99af33f1c5b49579208c7a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.632: INFO: Pod "webserver-deployment-84855cf797-rms69" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rms69 webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-rms69 dee18e98-6853-422a-99e1-442d2a173f6b 661313 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fc9a7 0xc0045fc9a8}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a00e88d2-88b0-4c2f-af10-5efacd9259e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.633: INFO: Pod "webserver-deployment-84855cf797-vdtxm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vdtxm webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-vdtxm 768b087c-081c-4408-904f-6f4f815b30da 661303 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fcb97 0xc0045fcb98}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 15:52:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 
115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-01 15:52:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 1 15:52:06.633: INFO: Pod "webserver-deployment-84855cf797-wh4nb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wh4nb webserver-deployment-84855cf797- deployment-2283 /api/v1/namespaces/deployment-2283/pods/webserver-deployment-84855cf797-wh4nb 6b99d2cc-5f7f-4532-8e0f-0955a8e50fc2 661263 0 2020-05-01 15:52:03 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 a00e88d2-88b0-4c2f-af10-5efacd9259e9 0xc0045fcd77 0xc0045fcd78}] [] [{kube-controller-manager Update v1 2020-05-01 15:52:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 48 48 101 56 56 100 50 45 56 56 98 48 45 52 99 50 102 45 97 102 49 48 45 53 101 102 97 99 100 57 50 53 57 101 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lddsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lddsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lddsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 15:52:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:52:06.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2283" for this suite. 
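The long runs of decimal numbers in the pod dumps above are the FieldsV1 `Raw` managed-fields payloads, which Go prints as byte slices of ASCII codes. Assuming a Python interpreter is at hand, such a run decodes back into the underlying JSON; the `raw` sample below is a short made-up fragment of the same shape, not copied from the log:

```python
import json

# Decimal ASCII codes of the kind printed for FieldsV1 Raw in the log
# above (sample fragment, encodes {"f:status":{}}).
raw = "123 34 102 58 115 116 97 116 117 115 34 58 123 125 125"

# Convert each decimal code to a byte, then decode as UTF-8 JSON.
decoded = bytes(int(b) for b in raw.split()).decode("utf-8")
print(decoded)            # prints {"f:status":{}}
print(json.loads(decoded))
```

Pasting one of the full dumps above into `raw` recovers the managed-fields entry recorded by kube-controller-manager or the kubelet.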
• [SLOW TEST:21.756 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":102,"skipped":1875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:52:07.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 1 15:52:11.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572" in namespace "downward-api-4490" to be "Succeeded or Failed" May 1 15:52:11.554: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Pending", Reason="", readiness=false. Elapsed: 244.999962ms May 1 15:52:13.718: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.408917099s May 1 15:52:16.219: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909844077s May 1 15:52:18.346: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036961757s May 1 15:52:20.419: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Pending", Reason="", readiness=false. Elapsed: 9.109972867s May 1 15:52:22.450: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.141452119s STEP: Saw pod success May 1 15:52:22.451: INFO: Pod "downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572" satisfied condition "Succeeded or Failed" May 1 15:52:22.455: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572 container client-container: STEP: delete the pod May 1 15:52:22.904: INFO: Waiting for pod downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572 to disappear May 1 15:52:22.915: INFO: Pod downwardapi-volume-f715e6d9-fa55-49d4-8b9c-36fe3b97f572 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:52:22.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4490" for this suite. 
• [SLOW TEST:15.555 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1900,"failed":0} [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:52:22.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:52:23.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 1 15:52:24.081: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:23Z]] name:name1 resourceVersion:661522 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:3c31967b-fa7d-48ef-a16d-ef7efd16cd33] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 1 15:52:34.087: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:34Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:34Z]] name:name2 resourceVersion:661676 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d85f8bde-40d0-44cc-a03c-792a5abaa046] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 1 15:52:44.458: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:44Z]] name:name1 resourceVersion:661707 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3c31967b-fa7d-48ef-a16d-ef7efd16cd33] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 1 15:52:54.471: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:54Z]] name:name2 resourceVersion:661738 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 
uid:d85f8bde-40d0-44cc-a03c-792a5abaa046] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 1 15:53:04.484: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:44Z]] name:name1 resourceVersion:661766 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:3c31967b-fa7d-48ef-a16d-ef7efd16cd33] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 1 15:53:14.492: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-01T15:52:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-01T15:52:54Z]] name:name2 resourceVersion:661796 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:d85f8bde-40d0-44cc-a03c-792a5abaa046] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:53:25.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-849" for this suite. 
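The six watch events logged above (ADDED, MODIFIED, DELETED for each of name1 and name2) can be checked for per-resource ordering with a small grouping pass. The event list below is transcribed from the log; the grouping code itself is illustrative, not part of the test framework:

```python
from collections import defaultdict

# (event type, CR name) pairs in the order they appear in the log above.
log_events = [
    ("ADDED", "name1"), ("ADDED", "name2"),
    ("MODIFIED", "name1"), ("MODIFIED", "name2"),
    ("DELETED", "name1"), ("DELETED", "name2"),
]

# Group events by custom resource name.
per_cr = defaultdict(list)
for kind, name in log_events:
    per_cr[name].append(kind)

# Each CR should have gone through the full lifecycle in order.
assert all(seq == ["ADDED", "MODIFIED", "DELETED"] for seq in per_cr.values())
```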
• [SLOW TEST:62.048 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":104,"skipped":1900,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:53:25.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 1 15:53:35.221: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:35.244: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:53:37.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:37.273: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:53:39.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:39.248: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:53:41.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:41.249: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:53:43.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:43.248: INFO: Pod pod-with-prestop-http-hook still exists May 1 15:53:45.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 1 15:53:45.707: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:53:46.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2305" for this suite. 
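The preStop HTTP hook behavior exercised by the test above corresponds roughly to a pod manifest like the following. This is an illustrative sketch only: the handler address, port, path, and image are assumptions, not values taken from the log.

```yaml
# Sketch of a pod with a preStop httpGet lifecycle hook (values assumed).
# On pod deletion, the kubelet issues this GET against the handler pod
# before sending SIGTERM to the container, which is what the test's
# "check prestop hook" step verifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2      # assumed image
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.1           # assumed address of the hook-handler pod
          path: /echo?msg=prestop    # assumed handler path
          port: 8080                 # assumed handler port
```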
• [SLOW TEST:21.050 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1930,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:53:46.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 1 15:53:46.405: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:53:56.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-4474" for this suite. • [SLOW TEST:10.574 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":106,"skipped":1930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:53:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 1 15:53:57.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69" in namespace "projected-4675" to be "Succeeded or Failed" May 1 15:53:57.631: INFO: Pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69": Phase="Pending", Reason="", 
readiness=false. Elapsed: 164.529524ms May 1 15:53:59.814: INFO: Pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34711893s May 1 15:54:01.871: INFO: Pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404424987s May 1 15:54:04.801: INFO: Pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.334042245s STEP: Saw pod success May 1 15:54:04.801: INFO: Pod "downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69" satisfied condition "Succeeded or Failed" May 1 15:54:04.804: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69 container client-container: STEP: delete the pod May 1 15:54:05.720: INFO: Waiting for pod downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69 to disappear May 1 15:54:06.070: INFO: Pod downwardapi-volume-576aab77-5078-4c6b-ab46-2681a7b68e69 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:54:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4675" for this suite. 
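The downward API test above relies on a projected volume whose `limits.memory` entry falls back to the node's allocatable memory when the container declares no memory limit. A minimal sketch of such a pod, with assumed names and image:

```yaml
# Illustrative downward API projected-volume pod (names/image assumed).
# Because the container sets no memory limit, the projected
# "limits.memory" value defaults to the node's allocatable memory,
# which is the behavior the conformance test asserts.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```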
• [SLOW TEST:9.434 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1962,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:54:06.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:54:06.479: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8" in namespace "security-context-test-8875" to be "Succeeded or Failed" May 1 15:54:06.823: INFO: Pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8": Phase="Pending", 
Reason="", readiness=false. Elapsed: 344.146354ms May 1 15:54:08.828: INFO: Pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348364507s May 1 15:54:10.832: INFO: Pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.35275203s May 1 15:54:12.836: INFO: Pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.357144735s May 1 15:54:12.837: INFO: Pod "busybox-readonly-false-dae7368b-db1a-4b66-a8ff-f6b53152ffc8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:54:12.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8875" for this suite. • [SLOW TEST:6.766 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1970,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 
[BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:54:12.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4290 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4290 I0501 15:54:13.006861 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4290, replica count: 2 I0501 15:54:16.057534 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:54:19.057769 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:54:22.058109 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 15:54:25.058348 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 15:54:25.058: INFO: Creating new exec pod May 1 15:54:34.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4290 execpodkkzsn 
-- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 1 15:54:34.391: INFO: stderr: "I0501 15:54:34.262286 2336 log.go:172] (0xc0003e42c0) (0xc00065edc0) Create stream\nI0501 15:54:34.262370 2336 log.go:172] (0xc0003e42c0) (0xc00065edc0) Stream added, broadcasting: 1\nI0501 15:54:34.263965 2336 log.go:172] (0xc0003e42c0) Reply frame received for 1\nI0501 15:54:34.264018 2336 log.go:172] (0xc0003e42c0) (0xc00070e1e0) Create stream\nI0501 15:54:34.264035 2336 log.go:172] (0xc0003e42c0) (0xc00070e1e0) Stream added, broadcasting: 3\nI0501 15:54:34.264842 2336 log.go:172] (0xc0003e42c0) Reply frame received for 3\nI0501 15:54:34.264872 2336 log.go:172] (0xc0003e42c0) (0xc00070e280) Create stream\nI0501 15:54:34.264878 2336 log.go:172] (0xc0003e42c0) (0xc00070e280) Stream added, broadcasting: 5\nI0501 15:54:34.265913 2336 log.go:172] (0xc0003e42c0) Reply frame received for 5\nI0501 15:54:34.352506 2336 log.go:172] (0xc0003e42c0) Data frame received for 5\nI0501 15:54:34.352536 2336 log.go:172] (0xc00070e280) (5) Data frame handling\nI0501 15:54:34.352555 2336 log.go:172] (0xc00070e280) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0501 15:54:34.382095 2336 log.go:172] (0xc0003e42c0) Data frame received for 5\nI0501 15:54:34.382138 2336 log.go:172] (0xc00070e280) (5) Data frame handling\nI0501 15:54:34.382172 2336 log.go:172] (0xc00070e280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0501 15:54:34.382625 2336 log.go:172] (0xc0003e42c0) Data frame received for 5\nI0501 15:54:34.382748 2336 log.go:172] (0xc0003e42c0) Data frame received for 3\nI0501 15:54:34.382792 2336 log.go:172] (0xc00070e1e0) (3) Data frame handling\nI0501 15:54:34.382813 2336 log.go:172] (0xc00070e280) (5) Data frame handling\nI0501 15:54:34.384613 2336 log.go:172] (0xc0003e42c0) Data frame received for 1\nI0501 15:54:34.384646 2336 log.go:172] (0xc00065edc0) (1) Data frame handling\nI0501 15:54:34.384686 2336 log.go:172] 
(0xc00065edc0) (1) Data frame sent\nI0501 15:54:34.384719 2336 log.go:172] (0xc0003e42c0) (0xc00065edc0) Stream removed, broadcasting: 1\nI0501 15:54:34.384743 2336 log.go:172] (0xc0003e42c0) Go away received\nI0501 15:54:34.385612 2336 log.go:172] (0xc0003e42c0) (0xc00065edc0) Stream removed, broadcasting: 1\nI0501 15:54:34.385651 2336 log.go:172] (0xc0003e42c0) (0xc00070e1e0) Stream removed, broadcasting: 3\nI0501 15:54:34.385668 2336 log.go:172] (0xc0003e42c0) (0xc00070e280) Stream removed, broadcasting: 5\n" May 1 15:54:34.391: INFO: stdout: "" May 1 15:54:34.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4290 execpodkkzsn -- /bin/sh -x -c nc -zv -t -w 2 10.99.91.145 80' May 1 15:54:34.586: INFO: stderr: "I0501 15:54:34.515459 2357 log.go:172] (0xc0007e22c0) (0xc00080a140) Create stream\nI0501 15:54:34.515531 2357 log.go:172] (0xc0007e22c0) (0xc00080a140) Stream added, broadcasting: 1\nI0501 15:54:34.518836 2357 log.go:172] (0xc0007e22c0) Reply frame received for 1\nI0501 15:54:34.518875 2357 log.go:172] (0xc0007e22c0) (0xc00080a1e0) Create stream\nI0501 15:54:34.518884 2357 log.go:172] (0xc0007e22c0) (0xc00080a1e0) Stream added, broadcasting: 3\nI0501 15:54:34.519747 2357 log.go:172] (0xc0007e22c0) Reply frame received for 3\nI0501 15:54:34.519786 2357 log.go:172] (0xc0007e22c0) (0xc0002ee000) Create stream\nI0501 15:54:34.519797 2357 log.go:172] (0xc0007e22c0) (0xc0002ee000) Stream added, broadcasting: 5\nI0501 15:54:34.520632 2357 log.go:172] (0xc0007e22c0) Reply frame received for 5\nI0501 15:54:34.580224 2357 log.go:172] (0xc0007e22c0) Data frame received for 5\nI0501 15:54:34.580284 2357 log.go:172] (0xc0002ee000) (5) Data frame handling\nI0501 15:54:34.580304 2357 log.go:172] (0xc0002ee000) (5) Data frame sent\nI0501 15:54:34.580318 2357 log.go:172] (0xc0007e22c0) Data frame received for 5\n+ nc -zv -t -w 2 10.99.91.145 80\nConnection to 10.99.91.145 80 port 
[tcp/http] succeeded!\nI0501 15:54:34.580330 2357 log.go:172] (0xc0002ee000) (5) Data frame handling\nI0501 15:54:34.580389 2357 log.go:172] (0xc0007e22c0) Data frame received for 3\nI0501 15:54:34.580419 2357 log.go:172] (0xc00080a1e0) (3) Data frame handling\nI0501 15:54:34.581934 2357 log.go:172] (0xc0007e22c0) Data frame received for 1\nI0501 15:54:34.581954 2357 log.go:172] (0xc00080a140) (1) Data frame handling\nI0501 15:54:34.581966 2357 log.go:172] (0xc00080a140) (1) Data frame sent\nI0501 15:54:34.581981 2357 log.go:172] (0xc0007e22c0) (0xc00080a140) Stream removed, broadcasting: 1\nI0501 15:54:34.582024 2357 log.go:172] (0xc0007e22c0) Go away received\nI0501 15:54:34.582346 2357 log.go:172] (0xc0007e22c0) (0xc00080a140) Stream removed, broadcasting: 1\nI0501 15:54:34.582362 2357 log.go:172] (0xc0007e22c0) (0xc00080a1e0) Stream removed, broadcasting: 3\nI0501 15:54:34.582373 2357 log.go:172] (0xc0007e22c0) (0xc0002ee000) Stream removed, broadcasting: 5\n" May 1 15:54:34.586: INFO: stdout: "" May 1 15:54:34.586: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:54:35.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4290" for this suite. 
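The type change validated above can be sketched as a before/after pair of Service manifests. The service name matches the log; the external name, selector, and ports are assumptions for illustration:

```yaml
# Starting point: an ExternalName service (externalName value assumed).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: clusterip.example.com
---
# After the test mutates the service to type=ClusterIP and backs it with
# a replication controller, the exec pod's
# "nc -zv -t -w 2 externalname-service 80" probe succeeds.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # assumed selector matching the RC's pods
  ports:
  - port: 80
    targetPort: 80
```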
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:22.558 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":109,"skipped":1982,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:54:35.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:54:50.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-207" for this suite. STEP: Destroying namespace "nsdeletetest-2800" for this suite. May 1 15:54:50.711: INFO: Namespace nsdeletetest-2800 was already deleted STEP: Destroying namespace "nsdeletetest-6840" for this suite. • [SLOW TEST:15.311 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":110,"skipped":1984,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:54:50.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:04.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1340" for this suite. • [SLOW TEST:14.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":111,"skipped":1997,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:04.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1594" for this suite. 
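The read-only busybox pod exercised by the Kubelet test above hinges on `securityContext.readOnlyRootFilesystem`. A minimal sketch, with an assumed image and command (the test asserts that a write to the root filesystem fails):

```yaml
# Illustrative read-only rootfs pod (image/command assumed).
# With readOnlyRootFilesystem: true, the "echo test > /file" write
# is rejected, which is what the conformance test checks.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
```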
• [SLOW TEST:10.039 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1999,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:14.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-9147/configmap-test-f26e7dd4-843b-4034-9e6a-b5ca90993de4 STEP: Creating a pod to test consume configMaps May 1 15:55:15.453: INFO: Waiting up to 5m0s for pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3" in namespace "configmap-9147" to be "Succeeded or Failed" May 1 15:55:15.456: INFO: Pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.721912ms May 1 15:55:17.480: INFO: Pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027204969s May 1 15:55:19.633: INFO: Pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3": Phase="Running", Reason="", readiness=true. Elapsed: 4.179673208s May 1 15:55:22.004: INFO: Pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.550832944s STEP: Saw pod success May 1 15:55:22.004: INFO: Pod "pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3" satisfied condition "Succeeded or Failed" May 1 15:55:22.006: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3 container env-test: STEP: delete the pod May 1 15:55:23.300: INFO: Waiting for pod pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3 to disappear May 1 15:55:23.585: INFO: Pod pod-configmaps-1878db46-41d6-4f92-b289-440f33accaf3 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:23.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9147" for this suite. 
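The ConfigMap-to-environment consumption tested above pairs a ConfigMap with a pod that maps a key into an env var. The ConfigMap name below matches the log; the key, value, env var name, and image are assumptions:

```yaml
# Illustrative ConfigMap + consuming pod (key/value/env name assumed).
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-f26e7dd4-843b-4034-9e6a-b5ca90993de4
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # name assumed
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # test inspects the logged environment
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-f26e7dd4-843b-4034-9e6a-b5ca90993de4
          key: data-1
```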
• [SLOW TEST:8.668 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":2015,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:23.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info May 1 15:55:24.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info' May 1 15:55:25.222: INFO: stderr: "" May 1 15:55:25.222: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl 
cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:25.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6851" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":114,"skipped":2017,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:25.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 1 15:55:26.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3121' May 1 15:55:26.732: INFO: stderr: "" May 1 15:55:26.732: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is 
running STEP: verifying the pod e2e-test-httpd-pod was created May 1 15:55:36.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3121 -o json' May 1 15:55:36.882: INFO: stderr: "" May 1 15:55:36.882: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-01T15:55:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-01T15:55:26Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n 
\"k:{\\\"ip\\\":\\\"10.244.1.229\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-01T15:55:32Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3121\",\n \"resourceVersion\": \"662488\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3121/pods/e2e-test-httpd-pod\",\n \"uid\": \"a10024e0-465e-4775-b1db-d57286c034ee\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rwhsj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rwhsj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rwhsj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:55:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:55:32Z\",\n \"status\": 
\"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:55:32Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-01T15:55:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7dcf883fc27974d83cdb2061fe750a33b3ea8f607c2304d880d53e12d84b722a\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-01T15:55:31Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.18\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.229\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.229\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-01T15:55:27Z\"\n }\n}\n" STEP: replace the image in the pod May 1 15:55:36.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3121' May 1 15:55:38.021: INFO: stderr: "" May 1 15:55:38.021: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 1 15:55:38.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3121' May 1 15:55:53.525: INFO: stderr: "" May 1 15:55:53.525: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:53.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3121" for this suite. • [SLOW TEST:28.346 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":115,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:53.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server May 1 15:55:53.847: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:53.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7483" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":116,"skipped":2058,"failed":0} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:53.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 1 15:55:54.168: INFO: Pod name pod-release: Found 0 pods out of 1 May 1 15:55:59.178: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:55:59.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-36" for this suite. 
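The ReplicationController test above hinges on label selectors: a pod whose labels stop matching the RC's `.spec.selector` is released (its ownerReference is cleared) and the controller starts a replacement, which is why the log sees "Found 1 pods out of 1" again after the relabel. A representative manifest for the scenario, assuming the image and label value (the suite constructs this object in Go rather than from YAML):

```yaml
# Sketch of the RC the "should release no longer matching pods" test creates.
# The pod-release name comes from the log; everything else is an assumption.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release        # pods whose labels stop matching this are released
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/httpd:2.4.38-alpine
```

Outside the suite, the same release can be reproduced by overwriting the label on a matching pod (e.g. `kubectl label pod <pod> name=released --overwrite`) and checking that its `.metadata.ownerReferences` is now empty.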
• [SLOW TEST:5.787 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":117,"skipped":2059,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:55:59.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-f1ab20b3-1e2c-401c-aa90-26fa1bdb8ec2 STEP: Creating a pod to test consume configMaps May 1 15:56:00.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814" in namespace "configmap-3444" to be "Succeeded or Failed" May 1 15:56:00.663: INFO: Pod "pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814": Phase="Pending", Reason="", readiness=false. Elapsed: 92.450916ms May 1 15:56:02.716: INFO: Pod "pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.146118621s May 1 15:56:04.720: INFO: Pod "pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149669166s STEP: Saw pod success May 1 15:56:04.720: INFO: Pod "pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814" satisfied condition "Succeeded or Failed" May 1 15:56:04.723: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814 container configmap-volume-test: STEP: delete the pod May 1 15:56:04.867: INFO: Waiting for pod pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814 to disappear May 1 15:56:04.882: INFO: Pod pod-configmaps-86877abc-ed65-42e1-8a5f-97b9102c6814 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:56:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3444" for this suite. • [SLOW TEST:5.240 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2075,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:56:04.967: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 15:56:17.449: INFO: DNS probes using dns-test-12b893c2-5837-431c-bda9-89e99631d56c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 15:56:29.136: INFO: File wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 1 15:56:29.140: INFO: File jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:29.140: INFO: Lookups using dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 failed for: [wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local] May 1 15:56:34.155: INFO: File wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:34.158: INFO: File jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:34.158: INFO: Lookups using dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 failed for: [wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local] May 1 15:56:39.146: INFO: File wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:39.150: INFO: File jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:39.150: INFO: Lookups using dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 failed for: [wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local] May 1 15:56:44.152: INFO: File wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 1 15:56:44.155: INFO: File jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local from pod dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 contains 'foo.example.com. ' instead of 'bar.example.com.' May 1 15:56:44.155: INFO: Lookups using dns-8345/dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 failed for: [wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local] May 1 15:56:49.150: INFO: DNS probes using dns-test-3f3d5125-c8c0-41a4-900d-2fcf1e36d353 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8345.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8345.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 15:57:01.853: INFO: DNS probes using dns-test-4a14fc08-ee35-4390-bf14-80292d0d7755 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:57:02.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8345" for this suite. 
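The probes above exercise ExternalName resolution: for a Service of type ExternalName, cluster DNS answers the service's in-cluster name with a CNAME to `spec.externalName`. That is why the wheezy and jessie probe pods keep seeing `foo.example.com.` until the update to `bar.example.com.` propagates, and why the final phase switches the dig query from CNAME to A once the service becomes ClusterIP. A sketch of the service as first created (service name and namespace are taken from the log; the target hostname matches the recorded lookups):

```yaml
# ExternalName service as the DNS test initially creates it (reconstruction).
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-8345
spec:
  type: ExternalName
  externalName: foo.example.com   # cluster DNS serves this as a CNAME target
```

The test then patches `externalName` to `bar.example.com` and finally replaces the spec with `type: ClusterIP`, at which point the name resolves to a cluster IP (an A record) instead of a CNAME.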
• [SLOW TEST:57.285 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":119,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:57:02.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0501 15:57:43.681971 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 1 15:57:43.682: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:57:43.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5055" for this suite. 
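The orphaning behavior verified above is driven by deletion propagation: deleting the RC with the `Orphan` propagation policy clears the pods' ownerReferences instead of cascading, so the garbage collector must leave the pods running for the 30-second observation window in the log. A sketch of the delete options involved (the suite issues this through the Go client rather than as YAML):

```yaml
# DeleteOptions body for an orphaning delete (sketch).
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

The kubectl equivalent is `kubectl delete rc <name> --cascade=orphan` (spelled `--cascade=false` on older client versions).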
• [SLOW TEST:41.436 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":120,"skipped":2108,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:57:43.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 15:57:44.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 15:57:46.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945464, 
loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945464, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945464, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945464, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 15:57:49.836: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:57:49.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9470" for this suite. STEP: Destroying namespace "webhook-9470-markers" for this suite. 
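The "Registering the mutating configmap webhook" step above amounts to creating a MutatingWebhookConfiguration that routes ConfigMap CREATE requests through the sample webhook service the test just deployed. A hedged reconstruction (the service name and namespace come from the log; the configuration name, webhook name, and path are assumptions, and the CA bundle is a placeholder):

```yaml
# Reconstruction of the mutating webhook registration (names/path assumed).
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-configmap     # assumed name
webhooks:
- name: adding-configmap-data.example.com   # assumed webhook name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-9470
      name: e2e-test-webhook
      path: /mutating-configmaps        # assumed path
    caBundle: "<base64-encoded CA>"     # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

The subsequent "create a configmap that should be updated" step passes only if the webhook's mutating patch shows up in the stored object.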
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.369 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":121,"skipped":2109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:57:50.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:58:26.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1122" for this suite. STEP: Destroying namespace "nsdeletetest-8285" for this suite. May 1 15:58:26.094: INFO: Namespace nsdeletetest-8285 was already deleted STEP: Destroying namespace "nsdeletetest-7915" for this suite. • [SLOW TEST:36.045 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":122,"skipped":2144,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:58:26.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: 
Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 15:58:26.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 15:58:28.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945507, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:58:30.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945507, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945506, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 15:58:33.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:58:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6722" for this suite. STEP: Destroying namespace "webhook-6722-markers" for this suite. 
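"Fail closed" in the test above means `failurePolicy: Fail`: when the API server cannot reach the webhook backend, it rejects the request rather than admitting it, which is why the configmap create in the marked namespace is unconditionally refused. A sketch of such a registration (the namespace and service name follow the log; the configuration name, path, and rule scope are illustrative):

```yaml
# Fail-closed validating webhook pointed at a backend that never answers (sketch).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed.example.com      # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                # unreachable backend => reject the request
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-6722
      name: e2e-test-webhook
      path: /does-not-exist          # the test targets a path the server cannot serve
  sideEffects: None
  admissionReviewVersions: ["v1"]
```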
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.975 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":123,"skipped":2147,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:34.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 1 15:58:34.170: INFO: Waiting up to 5m0s for pod "downward-api-f92818ea-add0-4f04-a253-585046c7fa83" in namespace "downward-api-500" to be "Succeeded or Failed"
May 1 15:58:34.215: INFO: Pod "downward-api-f92818ea-add0-4f04-a253-585046c7fa83": Phase="Pending", Reason="", readiness=false. Elapsed: 44.674687ms
May 1 15:58:36.293: INFO: Pod "downward-api-f92818ea-add0-4f04-a253-585046c7fa83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122721367s
May 1 15:58:38.297: INFO: Pod "downward-api-f92818ea-add0-4f04-a253-585046c7fa83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12728874s
STEP: Saw pod success
May 1 15:58:38.298: INFO: Pod "downward-api-f92818ea-add0-4f04-a253-585046c7fa83" satisfied condition "Succeeded or Failed"
May 1 15:58:38.301: INFO: Trying to get logs from node kali-worker pod downward-api-f92818ea-add0-4f04-a253-585046c7fa83 container dapi-container:
STEP: delete the pod
May 1 15:58:38.363: INFO: Waiting for pod downward-api-f92818ea-add0-4f04-a253-585046c7fa83 to disappear
May 1 15:58:38.370: INFO: Pod downward-api-f92818ea-add0-4f04-a253-585046c7fa83 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:58:38.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-500" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2159,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:38.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
May 1 15:58:43.060: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6810 pod-service-account-059d5431-153f-4cd8-91a3-0d0330be7b60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 1 15:58:43.279: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6810 pod-service-account-059d5431-153f-4cd8-91a3-0d0330be7b60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 1 15:58:43.490: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6810 pod-service-account-059d5431-153f-4cd8-91a3-0d0330be7b60 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:58:43.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6810" for this suite.
• [SLOW TEST:5.478 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":125,"skipped":2172,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:43.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:58:44.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4032" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2177,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:44.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 1 15:58:44.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May 1 15:58:44.340: INFO: stderr: ""
May 1 15:58:44.340: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T14:47:14Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:58:44.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3329" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":127,"skipped":2195,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:44.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 1 15:58:45.353: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 1 15:58:47.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945525, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945525, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945525, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945525, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 1 15:58:50.611: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 1 15:58:54.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-2507 to-be-attached-pod -i -c=container1'
May 1 15:58:54.859: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:58:54.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2507" for this suite.
STEP: Destroying namespace "webhook-2507-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.732 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":128,"skipped":2196,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 1 15:58:55.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1638.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.98.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.98.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.98.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.98.190_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1638.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1638.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1638.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1638.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1638.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 190.98.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.98.190_udp@PTR;check="$$(dig +tcp +noall +answer +search 190.98.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.98.190_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 1 15:59:05.786: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:05.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.103: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.107: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.127: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.130: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.133: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.136: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:06.157: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:11.162: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.172: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.667: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.670: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.672: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:11.692: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:16.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.165: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.168: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.339: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:16.364: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:21.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.167: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.193: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.195: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.198: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.201: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:21.216: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:26.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.164: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.170: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.189: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.192: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.195: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.198: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:26.212: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:31.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.188: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.191: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.588: INFO: Unable to read jessie_udp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.595: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local from pod dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d: the server could not find the requested resource (get pods dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d)
May 1 15:59:31.607: INFO: Lookups using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d failed for: [wheezy_udp@dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@dns-test-service.dns-1638.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_udp@dns-test-service.dns-1638.svc.cluster.local jessie_tcp@dns-test-service.dns-1638.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1638.svc.cluster.local]
May 1 15:59:36.401: INFO: DNS probes using dns-1638/dns-test-72b8f12c-b8c3-4196-b79b-4c944d86835d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 1 15:59:37.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1638" for this suite.
• [SLOW TEST:42.532 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":129,"skipped":2236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:59:37.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 15:59:38.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 15:59:40.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945578, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 15:59:42.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945579, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945578, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 15:59:45.950: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 15:59:45.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating 
webhook for custom resource e2e-test-webhook-5174-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:59:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1007" for this suite. STEP: Destroying namespace "webhook-1007-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.514 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":130,"skipped":2300,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:59:50.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers 
with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:59:50.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1533" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":131,"skipped":2304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:59:50.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 1 15:59:56.271: INFO: Successfully updated pod 
"pod-update-activedeadlineseconds-cedd2b55-d0e6-45b7-b957-c4b3fe76ddac" May 1 15:59:56.271: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-cedd2b55-d0e6-45b7-b957-c4b3fe76ddac" in namespace "pods-8883" to be "terminated due to deadline exceeded" May 1 15:59:57.041: INFO: Pod "pod-update-activedeadlineseconds-cedd2b55-d0e6-45b7-b957-c4b3fe76ddac": Phase="Running", Reason="", readiness=true. Elapsed: 769.763189ms May 1 15:59:59.046: INFO: Pod "pod-update-activedeadlineseconds-cedd2b55-d0e6-45b7-b957-c4b3fe76ddac": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.774827047s May 1 15:59:59.046: INFO: Pod "pod-update-activedeadlineseconds-cedd2b55-d0e6-45b7-b957-c4b3fe76ddac" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 15:59:59.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8883" for this suite. 
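The transition logged above (`Phase="Running"` then `Phase="Failed", Reason="DeadlineExceeded"`) follows from the semantics of `spec.activeDeadlineSeconds`: once a pod has been active longer than that many seconds, the kubelet fails it with reason `DeadlineExceeded`. A sketch of that check, with illustrative times (this is not the kubelet's actual code):

```python
from datetime import datetime, timedelta

def evaluate_deadline(start_time: datetime, active_deadline_seconds: int, now: datetime):
    """Return (phase, reason) the way the kubelet's active-deadline check would."""
    if now - start_time > timedelta(seconds=active_deadline_seconds):
        return "Failed", "DeadlineExceeded"
    return "Running", ""

start = datetime(2020, 5, 1, 15, 59, 56)
early = evaluate_deadline(start, 5, start + timedelta(seconds=2))   # within deadline
late = evaluate_deadline(start, 5, start + timedelta(seconds=10))   # deadline exceeded
print(early, late)
```

The test updates a running pod to a short deadline and then waits, as above, for the "terminated due to deadline exceeded" condition.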
• [SLOW TEST:8.211 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 15:59:59.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-h2rf STEP: Creating a pod to test atomic-volume-subpath May 1 15:59:59.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-h2rf" in namespace "subpath-4047" to be "Succeeded or Failed" May 1 15:59:59.236: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337615ms May 1 16:00:01.626: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.393926399s May 1 16:00:03.630: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 4.398107904s May 1 16:00:05.693: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 6.461069153s May 1 16:00:07.697: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 8.464406836s May 1 16:00:09.983: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 10.750606917s May 1 16:00:11.988: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 12.755474272s May 1 16:00:13.991: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 14.758906515s May 1 16:00:16.043: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 16.810440797s May 1 16:00:18.047: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 18.814418076s May 1 16:00:20.051: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 20.818978266s May 1 16:00:22.055: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 22.822557848s May 1 16:00:24.059: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Running", Reason="", readiness=true. Elapsed: 24.826637291s May 1 16:00:26.063: INFO: Pod "pod-subpath-test-secret-h2rf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.831196689s STEP: Saw pod success May 1 16:00:26.063: INFO: Pod "pod-subpath-test-secret-h2rf" satisfied condition "Succeeded or Failed" May 1 16:00:26.067: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-h2rf container test-container-subpath-secret-h2rf: STEP: delete the pod May 1 16:00:26.229: INFO: Waiting for pod pod-subpath-test-secret-h2rf to disappear May 1 16:00:26.274: INFO: Pod pod-subpath-test-secret-h2rf no longer exists STEP: Deleting pod pod-subpath-test-secret-h2rf May 1 16:00:26.274: INFO: Deleting pod "pod-subpath-test-secret-h2rf" in namespace "subpath-4047" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:00:26.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4047" for this suite. • [SLOW TEST:27.226 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":133,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client May 1 16:00:26.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 1 16:00:26.567: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664236 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:00:26.567: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664236 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 
125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 1 16:00:36.575: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664276 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:00:36.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664276 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 1 16:00:46.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 
/api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664304 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:00:46.584: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664304 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 1 16:00:56.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664333 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 
34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:00:56.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-a 6f1ebbae-c3ef-4f1e-baf2-b3dd78c5ec5e 664333 0 2020-05-01 16:00:26 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-01 16:00:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 1 16:01:06.756: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-b 23b53d95-319e-4ed4-a0cd-f12ea6b8b29a 664361 0 2020-05-01 16:01:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 16:01:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 
125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:01:06.757: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-b 23b53d95-319e-4ed4-a0cd-f12ea6b8b29a 664361 0 2020-05-01 16:01:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 16:01:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 1 16:01:16.762: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-b 23b53d95-319e-4ed4-a0cd-f12ea6b8b29a 664389 0 2020-05-01 16:01:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 16:01:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:01:16.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9962 /api/v1/namespaces/watch-9962/configmaps/e2e-watch-test-configmap-b 23b53d95-319e-4ed4-a0cd-f12ea6b8b29a 664389 0 2020-05-01 16:01:06 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-01 16:01:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 
58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:01:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9962" for this suite. • [SLOW TEST:60.491 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":134,"skipped":2426,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:01:26.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-36f84f20-b001-4d5a-bb13-1c2bf8237119 STEP: Creating a pod to test consume configMaps May 1 16:01:27.176: INFO: 
Waiting up to 5m0s for pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8" in namespace "configmap-3783" to be "Succeeded or Failed" May 1 16:01:27.204: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.686614ms May 1 16:01:29.263: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087672406s May 1 16:01:31.267: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091653486s May 1 16:01:33.373: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19684239s May 1 16:01:35.376: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.199910556s STEP: Saw pod success May 1 16:01:35.376: INFO: Pod "pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8" satisfied condition "Succeeded or Failed" May 1 16:01:35.378: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8 container configmap-volume-test: STEP: delete the pod May 1 16:01:35.965: INFO: Waiting for pod pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8 to disappear May 1 16:01:36.017: INFO: Pod pod-configmaps-5fd7e084-9949-421e-876f-f1b881f58ae8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:01:36.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3783" for this suite. 
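The ConfigMap volume test above creates a ConfigMap, mounts it into a one-shot pod, and reads a key's file from the container logs to prove consumption. A minimal sketch of that kind of pod manifest as a plain dict, using the ConfigMap name from this log; the key `data-1`, the `busybox` image, and the mount path are illustrative assumptions, not the e2e framework's exact spec:

```python
import json

def configmap_test_pod(pod_name: str, configmap_name: str,
                       mount_path: str = "/etc/configmap-volume") -> dict:
    """Build an illustrative pod spec that consumes a ConfigMap as a volume."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",  # run once so the pod can reach Succeeded
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",  # assumed image; the test uses its own utility image
                "command": ["cat", f"{mount_path}/data-1"],  # "data-1" is a hypothetical key
                "volumeMounts": [{"name": "configmap-volume", "mountPath": mount_path}],
            }],
            "volumes": [{"name": "configmap-volume",
                         "configMap": {"name": configmap_name}}],
        },
    }

pod = configmap_test_pod(
    "pod-configmaps-example",
    "configmap-test-volume-36f84f20-b001-4d5a-bb13-1c2bf8237119")
print(json.dumps(pod, indent=2))
```

Each ConfigMap key becomes a file under the mount path, which is why "Saw pod success" above is confirmed by fetching the container's logs.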
• [SLOW TEST:9.348 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:01:36.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e9464d33-14dd-4b5b-955d-391cf6cd2fe4 STEP: Creating a pod to test consume secrets May 1 16:01:36.312: INFO: Waiting up to 5m0s for pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7" in namespace "secrets-6302" to be "Succeeded or Failed" May 1 16:01:36.384: INFO: Pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7": Phase="Pending", Reason="", readiness=false. Elapsed: 71.873481ms May 1 16:01:38.414: INFO: Pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.102191993s May 1 16:01:40.418: INFO: Pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7": Phase="Running", Reason="", readiness=true. Elapsed: 4.106168298s May 1 16:01:42.421: INFO: Pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109418309s STEP: Saw pod success May 1 16:01:42.421: INFO: Pod "pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7" satisfied condition "Succeeded or Failed" May 1 16:01:42.424: INFO: Trying to get logs from node kali-worker pod pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7 container secret-env-test: STEP: delete the pod May 1 16:01:42.455: INFO: Waiting for pod pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7 to disappear May 1 16:01:42.460: INFO: Pod pod-secrets-d586d560-00ea-4a68-85c0-2b2c9c8a31a7 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:01:42.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6302" for this suite. 
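The Secrets test above follows the same pattern but injects the Secret through an environment variable rather than a volume. A sketch of that pod shape, using the Secret name from this log; the env var name, key, image, and command are illustrative assumptions:

```python
def secret_env_test_pod(pod_name: str, secret_name: str) -> dict:
    """Build an illustrative pod spec that consumes a Secret via an env var."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "secret-env-test",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "echo $SECRET_DATA"],
                "env": [{
                    "name": "SECRET_DATA",  # hypothetical variable name
                    "valueFrom": {"secretKeyRef": {"name": secret_name,
                                                   "key": "data-1"}},  # hypothetical key
                }],
            }],
        },
    }

pod = secret_env_test_pod(
    "pod-secrets-example",
    "secret-test-e9464d33-14dd-4b5b-955d-391cf6cd2fe4")
print(pod["spec"]["containers"][0]["env"])
```

As with the ConfigMap case, the test verifies consumption by reading the container's logs after the pod reaches Succeeded.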
• [SLOW TEST:6.367 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2461,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:01:42.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 1 16:01:42.554: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 1 16:01:42.635: INFO: Waiting for terminating namespaces to be deleted... 
May 1 16:01:42.637: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 1 16:01:42.641: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 16:01:42.641: INFO: Container kindnet-cni ready: true, restart count 1 May 1 16:01:42.641: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 16:01:42.641: INFO: Container kube-proxy ready: true, restart count 0 May 1 16:01:42.641: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 1 16:01:42.650: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 16:01:42.650: INFO: Container kindnet-cni ready: true, restart count 0 May 1 16:01:42.650: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 1 16:01:42.650: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-972448d0-43cb-4d7a-9107-f88a703e73b2 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-972448d0-43cb-4d7a-9107-f88a703e73b2 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-972448d0-43cb-4d7a-9107-f88a703e73b2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:02:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7654" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:30.658 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":137,"skipped":2466,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:02:13.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 1 16:02:13.244: INFO: Waiting up to 5m0s for pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93" in namespace "emptydir-6311" to be "Succeeded or Failed" May 1 16:02:13.269: INFO: Pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 25.667603ms May 1 16:02:15.272: INFO: Pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028735321s May 1 16:02:17.434: INFO: Pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190718519s May 1 16:02:19.438: INFO: Pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.194501313s STEP: Saw pod success May 1 16:02:19.438: INFO: Pod "pod-e46503be-474c-415e-a5a0-4c5c9362fe93" satisfied condition "Succeeded or Failed" May 1 16:02:19.455: INFO: Trying to get logs from node kali-worker pod pod-e46503be-474c-415e-a5a0-4c5c9362fe93 container test-container: STEP: delete the pod May 1 16:02:19.586: INFO: Waiting for pod pod-e46503be-474c-415e-a5a0-4c5c9362fe93 to disappear May 1 16:02:19.592: INFO: Pod pod-e46503be-474c-415e-a5a0-4c5c9362fe93 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:02:19.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6311" for this suite. • [SLOW TEST:6.452 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:02:19.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:02:37.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6277" for this suite. • [SLOW TEST:17.777 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":139,"skipped":2525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:02:37.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium May 1 16:02:37.770: INFO: Waiting up to 5m0s for pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137" in namespace "emptydir-5094" to be "Succeeded or Failed" May 1 16:02:37.977: INFO: Pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137": Phase="Pending", Reason="", readiness=false. Elapsed: 206.914276ms May 1 16:02:40.247: INFO: Pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.476243864s May 1 16:02:42.251: INFO: Pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137": Phase="Running", Reason="", readiness=true. Elapsed: 4.480416762s May 1 16:02:44.255: INFO: Pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.484870075s STEP: Saw pod success May 1 16:02:44.255: INFO: Pod "pod-3d8e634c-35df-403b-8ad8-b85509ea6137" satisfied condition "Succeeded or Failed" May 1 16:02:44.259: INFO: Trying to get logs from node kali-worker pod pod-3d8e634c-35df-403b-8ad8-b85509ea6137 container test-container: STEP: delete the pod May 1 16:02:44.295: INFO: Waiting for pod pod-3d8e634c-35df-403b-8ad8-b85509ea6137 to disappear May 1 16:02:44.311: INFO: Pod pod-3d8e634c-35df-403b-8ad8-b85509ea6137 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:02:44.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5094" for this suite. • [SLOW TEST:6.939 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2608,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:02:44.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 1 16:02:44.421: INFO: Waiting up to 5m0s for pod "pod-b408ad07-1886-4d22-a5d6-b78d48a04926" in namespace "emptydir-2612" to be "Succeeded or Failed" May 1 16:02:44.440: INFO: Pod "pod-b408ad07-1886-4d22-a5d6-b78d48a04926": Phase="Pending", Reason="", readiness=false. Elapsed: 19.27079ms May 1 16:02:46.450: INFO: Pod "pod-b408ad07-1886-4d22-a5d6-b78d48a04926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029393036s May 1 16:02:48.454: INFO: Pod "pod-b408ad07-1886-4d22-a5d6-b78d48a04926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03294921s STEP: Saw pod success May 1 16:02:48.454: INFO: Pod "pod-b408ad07-1886-4d22-a5d6-b78d48a04926" satisfied condition "Succeeded or Failed" May 1 16:02:48.456: INFO: Trying to get logs from node kali-worker pod pod-b408ad07-1886-4d22-a5d6-b78d48a04926 container test-container: STEP: delete the pod May 1 16:02:48.492: INFO: Waiting for pod pod-b408ad07-1886-4d22-a5d6-b78d48a04926 to disappear May 1 16:02:48.498: INFO: Pod pod-b408ad07-1886-4d22-a5d6-b78d48a04926 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:02:48.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2612" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2609,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:02:48.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8291 STEP: creating a selector STEP: Creating the service pods in kubernetes May 1 16:02:48.584: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 1 16:02:48.708: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 16:02:50.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 16:02:52.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 1 16:02:54.712: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:02:56.712: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:02:58.712: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:03:00.712: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:03:02.712: INFO: The status of Pod 
netserver-0 is Running (Ready = false) May 1 16:03:04.876: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:03:06.712: INFO: The status of Pod netserver-0 is Running (Ready = false) May 1 16:03:08.712: INFO: The status of Pod netserver-0 is Running (Ready = true) May 1 16:03:08.719: INFO: The status of Pod netserver-1 is Running (Ready = false) May 1 16:03:10.724: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 1 16:03:16.830: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.28 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8291 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:03:16.830: INFO: >>> kubeConfig: /root/.kube/config I0501 16:03:16.858490 7 log.go:172] (0xc002a74210) (0xc001c43c20) Create stream I0501 16:03:16.858527 7 log.go:172] (0xc002a74210) (0xc001c43c20) Stream added, broadcasting: 1 I0501 16:03:16.860604 7 log.go:172] (0xc002a74210) Reply frame received for 1 I0501 16:03:16.860667 7 log.go:172] (0xc002a74210) (0xc0018b80a0) Create stream I0501 16:03:16.860686 7 log.go:172] (0xc002a74210) (0xc0018b80a0) Stream added, broadcasting: 3 I0501 16:03:16.862328 7 log.go:172] (0xc002a74210) Reply frame received for 3 I0501 16:03:16.862377 7 log.go:172] (0xc002a74210) (0xc0018b8140) Create stream I0501 16:03:16.862400 7 log.go:172] (0xc002a74210) (0xc0018b8140) Stream added, broadcasting: 5 I0501 16:03:16.863423 7 log.go:172] (0xc002a74210) Reply frame received for 5 I0501 16:03:17.936674 7 log.go:172] (0xc002a74210) Data frame received for 3 I0501 16:03:17.936702 7 log.go:172] (0xc0018b80a0) (3) Data frame handling I0501 16:03:17.936721 7 log.go:172] (0xc0018b80a0) (3) Data frame sent I0501 16:03:17.936735 7 log.go:172] (0xc002a74210) Data frame received for 3 I0501 16:03:17.936757 7 log.go:172] (0xc0018b80a0) (3) Data frame handling I0501 16:03:17.936961 7 log.go:172] 
(0xc002a74210) Data frame received for 5 I0501 16:03:17.936992 7 log.go:172] (0xc0018b8140) (5) Data frame handling I0501 16:03:17.939184 7 log.go:172] (0xc002a74210) Data frame received for 1 I0501 16:03:17.939204 7 log.go:172] (0xc001c43c20) (1) Data frame handling I0501 16:03:17.939220 7 log.go:172] (0xc001c43c20) (1) Data frame sent I0501 16:03:17.939235 7 log.go:172] (0xc002a74210) (0xc001c43c20) Stream removed, broadcasting: 1 I0501 16:03:17.939251 7 log.go:172] (0xc002a74210) Go away received I0501 16:03:17.939299 7 log.go:172] (0xc002a74210) (0xc001c43c20) Stream removed, broadcasting: 1 I0501 16:03:17.939314 7 log.go:172] (0xc002a74210) (0xc0018b80a0) Stream removed, broadcasting: 3 I0501 16:03:17.939322 7 log.go:172] (0xc002a74210) (0xc0018b8140) Stream removed, broadcasting: 5 May 1 16:03:17.939: INFO: Found all expected endpoints: [netserver-0] May 1 16:03:17.942: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.252 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8291 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 1 16:03:17.942: INFO: >>> kubeConfig: /root/.kube/config I0501 16:03:17.966711 7 log.go:172] (0xc002a1ec60) (0xc0018b8d20) Create stream I0501 16:03:17.966736 7 log.go:172] (0xc002a1ec60) (0xc0018b8d20) Stream added, broadcasting: 1 I0501 16:03:17.969676 7 log.go:172] (0xc002a1ec60) Reply frame received for 1 I0501 16:03:17.969710 7 log.go:172] (0xc002a1ec60) (0xc0018b8dc0) Create stream I0501 16:03:17.969723 7 log.go:172] (0xc002a1ec60) (0xc0018b8dc0) Stream added, broadcasting: 3 I0501 16:03:17.970668 7 log.go:172] (0xc002a1ec60) Reply frame received for 3 I0501 16:03:17.970701 7 log.go:172] (0xc002a1ec60) (0xc0018b8e60) Create stream I0501 16:03:17.970713 7 log.go:172] (0xc002a1ec60) (0xc0018b8e60) Stream added, broadcasting: 5 I0501 16:03:17.972683 7 log.go:172] (0xc002a1ec60) Reply frame received for 5 I0501 16:03:19.052241 7 
log.go:172] (0xc002a1ec60) Data frame received for 3 I0501 16:03:19.052290 7 log.go:172] (0xc0018b8dc0) (3) Data frame handling I0501 16:03:19.052325 7 log.go:172] (0xc0018b8dc0) (3) Data frame sent I0501 16:03:19.052352 7 log.go:172] (0xc002a1ec60) Data frame received for 5 I0501 16:03:19.052370 7 log.go:172] (0xc0018b8e60) (5) Data frame handling I0501 16:03:19.052975 7 log.go:172] (0xc002a1ec60) Data frame received for 3 I0501 16:03:19.052996 7 log.go:172] (0xc0018b8dc0) (3) Data frame handling I0501 16:03:19.061749 7 log.go:172] (0xc002a1ec60) Data frame received for 1 I0501 16:03:19.061814 7 log.go:172] (0xc0018b8d20) (1) Data frame handling I0501 16:03:19.061843 7 log.go:172] (0xc0018b8d20) (1) Data frame sent I0501 16:03:19.061862 7 log.go:172] (0xc002a1ec60) (0xc0018b8d20) Stream removed, broadcasting: 1 I0501 16:03:19.061880 7 log.go:172] (0xc002a1ec60) Go away received I0501 16:03:19.062034 7 log.go:172] (0xc002a1ec60) (0xc0018b8d20) Stream removed, broadcasting: 1 I0501 16:03:19.062051 7 log.go:172] (0xc002a1ec60) (0xc0018b8dc0) Stream removed, broadcasting: 3 I0501 16:03:19.062061 7 log.go:172] (0xc002a1ec60) (0xc0018b8e60) Stream removed, broadcasting: 5 May 1 16:03:19.062: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:03:19.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8291" for this suite. 
• [SLOW TEST:30.561 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2620,"failed":0} [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:03:19.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode May 1 16:03:19.149: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2377" to be "Succeeded or Failed" May 1 16:03:19.172: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 23.451626ms May 1 16:03:21.177: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027909499s May 1 16:03:23.181: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031917641s May 1 16:03:25.584: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.434714773s May 1 16:03:27.661: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.51162523s STEP: Saw pod success May 1 16:03:27.661: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 1 16:03:27.738: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 1 16:03:27.894: INFO: Waiting for pod pod-host-path-test to disappear May 1 16:03:27.984: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:03:27.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2377" for this suite. 
• [SLOW TEST:8.929 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2620,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:03:27.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 1 16:03:29.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028" in namespace "projected-6432" to be "Succeeded or Failed" May 1 16:03:29.412: INFO: Pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028": Phase="Pending", Reason="", readiness=false. Elapsed: 191.679611ms May 1 16:03:32.026: INFO: Pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.806405333s May 1 16:03:34.038: INFO: Pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028": Phase="Running", Reason="", readiness=true. Elapsed: 4.818256025s May 1 16:03:36.042: INFO: Pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.821756487s STEP: Saw pod success May 1 16:03:36.042: INFO: Pod "downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028" satisfied condition "Succeeded or Failed" May 1 16:03:36.045: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028 container client-container: STEP: delete the pod May 1 16:03:36.167: INFO: Waiting for pod downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028 to disappear May 1 16:03:36.182: INFO: Pod downwardapi-volume-7572814a-9b44-4c0a-bf0b-5f556809e028 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:03:36.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6432" for this suite. 
• [SLOW TEST:8.191 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:03:36.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4206, will wait for the garbage collector to delete the pods May 1 16:03:42.402: INFO: Deleting Job.batch foo took: 6.141512ms May 1 16:03:42.902: INFO: Terminating Job.batch foo pods took: 500.265903ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:04:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4206" for this suite. 
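The job deletion above waits on the garbage collector, which removes the Job's pods via their ownerReferences before the test proceeds. A toy model of that cascade (not the real kube-controller-manager GC, which works incrementally against the API server):

```python
def cascade_delete(objects, kind, name):
    """Return the objects that survive deleting (kind, name) together
    with everything whose ownerReferences chain leads back to it -
    a simplified picture of the GC pass the log waits for."""
    doomed = {(kind, name)}
    changed = True
    while changed:
        changed = False
        for obj in objects:
            key = (obj["kind"], obj["name"])
            if key in doomed:
                continue
            owners = obj.get("ownerReferences", [])
            if any((o["kind"], o["name"]) in doomed for o in owners):
                doomed.add(key)
                changed = True
    return [o for o in objects if (o["kind"], o["name"]) not in doomed]
```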
• [SLOW TEST:47.970 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":145,"skipped":2677,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:04:24.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-87ee45f5-32c9-4f9e-873b-73a79ebb0189 STEP: Creating a pod to test consume configMaps May 1 16:04:24.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939" in namespace "projected-3859" to be "Succeeded or Failed" May 1 16:04:24.503: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939": Phase="Pending", Reason="", readiness=false. Elapsed: 25.329369ms May 1 16:04:26.620: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.142307862s May 1 16:04:28.656: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177936487s May 1 16:04:30.659: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939": Phase="Running", Reason="", readiness=true. Elapsed: 6.181847009s May 1 16:04:32.665: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.187457826s STEP: Saw pod success May 1 16:04:32.665: INFO: Pod "pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939" satisfied condition "Succeeded or Failed" May 1 16:04:32.668: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939 container projected-configmap-volume-test: STEP: delete the pod May 1 16:04:32.747: INFO: Waiting for pod pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939 to disappear May 1 16:04:32.763: INFO: Pod pod-projected-configmaps-d333b5c3-776e-4df8-ac5c-997c92310939 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:04:32.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3859" for this suite. 
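A projected configMap volume "with mappings" writes each selected data key to a caller-chosen file path instead of the key name itself. The mapping is just a key-to-path projection; a sketch with hypothetical key and path names (the test's actual configMap contents are not shown in the log):

```python
def project_items(data, items):
    """Materialize a configMap projection: each item maps a data key
    to a relative file path inside the volume. Toy model of the
    "consumable from pods in volume with mappings" behavior."""
    return {item["path"]: data[item["key"]] for item in items}
```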
• [SLOW TEST:8.611 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2687,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:04:32.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod May 1 16:04:32.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-729' May 1 16:04:36.104: INFO: stderr: "" May 1 16:04:36.104: INFO: stdout: "pod/pause created\n" May 1 16:04:36.104: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 1 16:04:36.104: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-729" to be "running and ready" May 1 16:04:36.164: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 60.122554ms May 1 16:04:38.263: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158542833s May 1 16:04:40.267: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.162441786s May 1 16:04:40.267: INFO: Pod "pause" satisfied condition "running and ready" May 1 16:04:40.267: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod May 1 16:04:40.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-729' May 1 16:04:40.379: INFO: stderr: "" May 1 16:04:40.379: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 1 16:04:40.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-729' May 1 16:04:40.492: INFO: stderr: "" May 1 16:04:40.492: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 1 16:04:40.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-729' May 1 16:04:40.618: INFO: stderr: "" May 1 16:04:40.618: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 1 16:04:40.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-729' May 1 16:04:40.715: INFO: stderr: "" May 1 16:04:40.715: INFO: 
stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources May 1 16:04:40.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-729' May 1 16:04:40.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 1 16:04:40.876: INFO: stdout: "pod \"pause\" force deleted\n" May 1 16:04:40.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-729' May 1 16:04:41.170: INFO: stderr: "No resources found in kubectl-729 namespace.\n" May 1 16:04:41.170: INFO: stdout: "" May 1 16:04:41.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-729 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 1 16:04:41.325: INFO: stderr: "" May 1 16:04:41.325: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:04:41.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-729" for this suite. 
• [SLOW TEST:8.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":147,"skipped":2695,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:04:41.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-xvxj STEP: Creating a pod to test atomic-volume-subpath May 1 16:04:41.449: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xvxj" in namespace "subpath-8283" to be "Succeeded or Failed" May 1 16:04:41.544: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 95.411664ms May 1 16:04:43.549: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100041771s May 1 16:04:45.555: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 4.106088831s May 1 16:04:47.583: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 6.133931931s May 1 16:04:49.596: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 8.147085301s May 1 16:04:51.600: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 10.151262339s May 1 16:04:53.604: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 12.155457058s May 1 16:04:55.830: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 14.380825192s May 1 16:04:57.834: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 16.384934996s May 1 16:04:59.837: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 18.388465799s May 1 16:05:01.843: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 20.393929018s May 1 16:05:03.854: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Running", Reason="", readiness=true. Elapsed: 22.404852198s May 1 16:05:05.870: INFO: Pod "pod-subpath-test-configmap-xvxj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.421465802s STEP: Saw pod success May 1 16:05:05.870: INFO: Pod "pod-subpath-test-configmap-xvxj" satisfied condition "Succeeded or Failed" May 1 16:05:05.873: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-xvxj container test-container-subpath-configmap-xvxj: STEP: delete the pod May 1 16:05:05.956: INFO: Waiting for pod pod-subpath-test-configmap-xvxj to disappear May 1 16:05:06.002: INFO: Pod pod-subpath-test-configmap-xvxj no longer exists STEP: Deleting pod pod-subpath-test-configmap-xvxj May 1 16:05:06.002: INFO: Deleting pod "pod-subpath-test-configmap-xvxj" in namespace "subpath-8283" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:06.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8283" for this suite. • [SLOW TEST:24.675 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":148,"skipped":2702,"failed":0} [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a 
kubernetes client May 1 16:05:06.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:12.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-133" for this suite. • [SLOW TEST:6.505 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:12.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: 
Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0501 16:05:23.968717 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 1 16:05:23.968: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:23.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8805" for this suite. 
• [SLOW TEST:11.785 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":150,"skipped":2723,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:24.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-c155650b-5662-4363-b64b-d6f4deb457d0 STEP: Creating a pod to test consume secrets May 1 16:05:24.616: INFO: Waiting up to 5m0s for pod "pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281" in namespace "secrets-1209" to be "Succeeded or Failed" May 1 16:05:24.640: INFO: Pod "pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281": Phase="Pending", Reason="", readiness=false. Elapsed: 24.080533ms May 1 16:05:26.643: INFO: Pod "pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0269917s May 1 16:05:28.662: INFO: Pod "pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046523352s STEP: Saw pod success May 1 16:05:28.662: INFO: Pod "pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281" satisfied condition "Succeeded or Failed" May 1 16:05:28.666: INFO: Trying to get logs from node kali-worker pod pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281 container secret-volume-test: STEP: delete the pod May 1 16:05:28.999: INFO: Waiting for pod pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281 to disappear May 1 16:05:29.272: INFO: Pod pod-secrets-3159b9d2-e4e7-4421-b413-071c66462281 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:29.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1209" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2724,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:29.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu 
limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 1 16:05:29.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f" in namespace "downward-api-9791" to be "Succeeded or Failed" May 1 16:05:29.508: INFO: Pod "downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.28349ms May 1 16:05:31.644: INFO: Pod "downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190966743s May 1 16:05:33.649: INFO: Pod "downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196055626s STEP: Saw pod success May 1 16:05:33.649: INFO: Pod "downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f" satisfied condition "Succeeded or Failed" May 1 16:05:33.652: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f container client-container: STEP: delete the pod May 1 16:05:33.741: INFO: Waiting for pod downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f to disappear May 1 16:05:33.748: INFO: Pod downwardapi-volume-f94d14c1-dc0d-4417-bcb7-491a03a7b09f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9791" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2735,"failed":0} SS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:33.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-c46a67c4-9336-4b64-b9a8-6d1c863b47bb STEP: Creating secret with name secret-projected-all-test-volume-0be4de11-5823-4ab0-abe5-b8d26a6445a5 STEP: Creating a pod to test Check all projections for projected volume plugin May 1 16:05:33.828: INFO: Waiting up to 5m0s for pod "projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7" in namespace "projected-5234" to be "Succeeded or Failed" May 1 16:05:33.859: INFO: Pod "projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.496863ms May 1 16:05:35.863: INFO: Pod "projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034870907s May 1 16:05:37.867: INFO: Pod "projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038373275s STEP: Saw pod success May 1 16:05:37.867: INFO: Pod "projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7" satisfied condition "Succeeded or Failed" May 1 16:05:37.868: INFO: Trying to get logs from node kali-worker2 pod projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7 container projected-all-volume-test: STEP: delete the pod May 1 16:05:37.898: INFO: Waiting for pod projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7 to disappear May 1 16:05:37.920: INFO: Pod projected-volume-5bed4cbd-dc69-454e-b006-85ea282b25a7 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:37.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5234" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2737,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:37.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] 
[sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:05:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6582" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":154,"skipped":2757,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:05:38.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-6bcdafde-4cd0-43fb-8f56-3defaff6fa15 in namespace container-probe-7787 May 1 16:05:42.218: INFO: Started pod busybox-6bcdafde-4cd0-43fb-8f56-3defaff6fa15 in namespace container-probe-7787 STEP: checking the pod's current state and verifying that restartCount is present May 1 16:05:42.220: INFO: Initial restart count of pod busybox-6bcdafde-4cd0-43fb-8f56-3defaff6fa15 is 0 May 1 16:06:35.473: INFO: Restart count of pod 
container-probe-7787/busybox-6bcdafde-4cd0-43fb-8f56-3defaff6fa15 is now 1 (53.252260189s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:06:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7787" for this suite. • [SLOW TEST:57.534 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2760,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:06:35.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment 
to be ready May 1 16:06:36.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 16:06:38.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945996, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945996, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945996, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723945996, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 16:06:41.694: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:06:41.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5277" for this suite. STEP: Destroying namespace "webhook-5277-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.241 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":156,"skipped":2768,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:06:41.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-6acea68e-da12-4a1e-bd79-e038d8b47875 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6acea68e-da12-4a1e-bd79-e038d8b47875 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:06:49.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8168" for this suite. • [SLOW TEST:8.178 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2771,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:06:49.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-4fd9 STEP: Creating a pod to test atomic-volume-subpath May 1 16:06:50.136: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4fd9" in namespace "subpath-8446" to be "Succeeded or Failed" May 1 16:06:50.171: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.662658ms May 1 16:06:52.399: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262462715s May 1 16:06:54.403: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 4.266454215s May 1 16:06:56.407: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 6.270407665s May 1 16:06:58.411: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 8.27421205s May 1 16:07:00.459: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 10.32261137s May 1 16:07:02.463: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 12.327074421s May 1 16:07:04.468: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 14.331181444s May 1 16:07:06.472: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 16.335842911s May 1 16:07:08.477: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 18.340677101s May 1 16:07:10.482: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 20.345179758s May 1 16:07:12.487: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.350148815s May 1 16:07:14.492: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Running", Reason="", readiness=true. Elapsed: 24.355292421s May 1 16:07:16.496: INFO: Pod "pod-subpath-test-projected-4fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.359991598s STEP: Saw pod success May 1 16:07:16.496: INFO: Pod "pod-subpath-test-projected-4fd9" satisfied condition "Succeeded or Failed" May 1 16:07:16.499: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-4fd9 container test-container-subpath-projected-4fd9: STEP: delete the pod May 1 16:07:16.532: INFO: Waiting for pod pod-subpath-test-projected-4fd9 to disappear May 1 16:07:16.548: INFO: Pod pod-subpath-test-projected-4fd9 no longer exists STEP: Deleting pod pod-subpath-test-projected-4fd9 May 1 16:07:16.548: INFO: Deleting pod "pod-subpath-test-projected-4fd9" in namespace "subpath-8446" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:07:16.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8446" for this suite. 
• [SLOW TEST:26.579 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":158,"skipped":2776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:07:16.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 1 16:07:17.218: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 1 16:07:19.354: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946037, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946037, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946037, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946037, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 16:07:22.831: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 16:07:23.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:07:25.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4461" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.236 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":159,"skipped":2805,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:07:25.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1259.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1259.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1259.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1259.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1259.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1259.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 1 16:07:34.926: INFO: DNS probes using dns-1259/dns-test-ddcbc4d9-c754-49e2-a55b-c50ace57096b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:07:35.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1259" for this suite. 
• [SLOW TEST:9.723 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":160,"skipped":2812,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:07:35.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-625 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet May 1 16:07:36.234: INFO: Found 0 stateful pods, waiting for 3 May 1 16:07:46.544: INFO: Found 2 stateful pods, waiting for 3 May 1 16:07:56.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true May 1 16:07:56.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 16:07:56.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 1 16:07:56.461: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 1 16:08:06.975: INFO: Updating stateful set ss2 May 1 16:08:07.001: INFO: Waiting for Pod statefulset-625/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 1 16:08:17.953: INFO: Found 2 stateful pods, waiting for 3 May 1 16:08:28.048: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 1 16:08:28.048: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 1 16:08:28.048: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 1 16:08:28.272: INFO: Updating stateful set ss2 May 1 16:08:28.730: INFO: Waiting for Pod statefulset-625/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 16:08:40.041: INFO: Updating stateful set ss2 May 1 16:08:40.436: INFO: Waiting for StatefulSet statefulset-625/ss2 to complete update May 1 16:08:40.436: INFO: Waiting for Pod statefulset-625/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 1 16:08:50.443: INFO: Waiting for StatefulSet statefulset-625/ss2 to complete update May 1 16:08:50.443: INFO: Waiting for Pod statefulset-625/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 1 16:09:00.558: INFO: Deleting all statefulset in ns statefulset-625 May 1 16:09:00.561: INFO: Scaling statefulset ss2 to 0 May 1 16:09:40.790: INFO: Waiting for statefulset status.replicas updated to 0 May 1 16:09:40.793: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:09:40.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-625" for this suite. • [SLOW TEST:125.296 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":161,"skipped":2823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:09:40.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: 
Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 1 16:09:40.935: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 666970 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:09:40.935: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 666972 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:09:40.935: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 666973 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 1 16:09:51.492: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 667070 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 
125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:09:51.493: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 667071 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 1 16:09:51.493: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1941 /api/v1/namespaces/watch-1941/configmaps/e2e-watch-test-label-changed d58427d5-720e-45a7-a31a-3bd283c2ad5d 667072 0 2020-05-01 16:09:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-01 16:09:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:09:51.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"watch-1941" for this suite. • [SLOW TEST:10.860 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":162,"skipped":2858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:09:51.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-0bd3e907-00c0-428f-8ec6-b2eea0864dfb STEP: Creating secret with name s-test-opt-upd-d4766832-2188-4c21-b489-f096d1b6eb27 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0bd3e907-00c0-428f-8ec6-b2eea0864dfb STEP: Updating secret s-test-opt-upd-d4766832-2188-4c21-b489-f096d1b6eb27 STEP: Creating secret with name s-test-opt-create-7edd464c-16e5-4862-9527-70e28f907baf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:11:23.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7266" for this suite. • [SLOW TEST:91.886 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:11:23.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 1 16:11:24.869: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 1 16:11:27.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946285, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:11:29.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946285, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:11:31.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946285, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 1 16:11:33.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946285, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946284, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 1 16:11:36.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:11:50.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3341" for this suite. STEP: Destroying namespace "webhook-3341-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:26.929 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":164,"skipped":2903,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:11:50.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9133 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9133 I0501 16:11:50.677511 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9133, replica count: 2 
I0501 16:11:53.727987 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0501 16:11:56.728196 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 1 16:11:56.728: INFO: Creating new exec pod May 1 16:12:04.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9133 execpodlx8xv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 1 16:12:04.461: INFO: stderr: "I0501 16:12:04.382597 2770 log.go:172] (0xc000aaf760) (0xc000a7a960) Create stream\nI0501 16:12:04.382685 2770 log.go:172] (0xc000aaf760) (0xc000a7a960) Stream added, broadcasting: 1\nI0501 16:12:04.387885 2770 log.go:172] (0xc000aaf760) Reply frame received for 1\nI0501 16:12:04.387950 2770 log.go:172] (0xc000aaf760) (0xc0005cd5e0) Create stream\nI0501 16:12:04.387971 2770 log.go:172] (0xc000aaf760) (0xc0005cd5e0) Stream added, broadcasting: 3\nI0501 16:12:04.389435 2770 log.go:172] (0xc000aaf760) Reply frame received for 3\nI0501 16:12:04.389479 2770 log.go:172] (0xc000aaf760) (0xc0003eca00) Create stream\nI0501 16:12:04.389497 2770 log.go:172] (0xc000aaf760) (0xc0003eca00) Stream added, broadcasting: 5\nI0501 16:12:04.390679 2770 log.go:172] (0xc000aaf760) Reply frame received for 5\nI0501 16:12:04.453442 2770 log.go:172] (0xc000aaf760) Data frame received for 5\nI0501 16:12:04.453480 2770 log.go:172] (0xc0003eca00) (5) Data frame handling\nI0501 16:12:04.453507 2770 log.go:172] (0xc0003eca00) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0501 16:12:04.453724 2770 log.go:172] (0xc000aaf760) Data frame received for 5\nI0501 16:12:04.453762 2770 log.go:172] (0xc0003eca00) (5) Data frame handling\nI0501 16:12:04.453813 2770 log.go:172] (0xc0003eca00) (5) Data frame sent\nConnection to 
externalname-service 80 port [tcp/http] succeeded!\nI0501 16:12:04.453885 2770 log.go:172] (0xc000aaf760) Data frame received for 5\nI0501 16:12:04.453904 2770 log.go:172] (0xc0003eca00) (5) Data frame handling\nI0501 16:12:04.454110 2770 log.go:172] (0xc000aaf760) Data frame received for 3\nI0501 16:12:04.454121 2770 log.go:172] (0xc0005cd5e0) (3) Data frame handling\nI0501 16:12:04.455814 2770 log.go:172] (0xc000aaf760) Data frame received for 1\nI0501 16:12:04.455847 2770 log.go:172] (0xc000a7a960) (1) Data frame handling\nI0501 16:12:04.455866 2770 log.go:172] (0xc000a7a960) (1) Data frame sent\nI0501 16:12:04.455891 2770 log.go:172] (0xc000aaf760) (0xc000a7a960) Stream removed, broadcasting: 1\nI0501 16:12:04.455930 2770 log.go:172] (0xc000aaf760) Go away received\nI0501 16:12:04.456285 2770 log.go:172] (0xc000aaf760) (0xc000a7a960) Stream removed, broadcasting: 1\nI0501 16:12:04.456307 2770 log.go:172] (0xc000aaf760) (0xc0005cd5e0) Stream removed, broadcasting: 3\nI0501 16:12:04.456318 2770 log.go:172] (0xc000aaf760) (0xc0003eca00) Stream removed, broadcasting: 5\n" May 1 16:12:04.461: INFO: stdout: "" May 1 16:12:04.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9133 execpodlx8xv -- /bin/sh -x -c nc -zv -t -w 2 10.102.220.19 80' May 1 16:12:04.667: INFO: stderr: "I0501 16:12:04.582757 2792 log.go:172] (0xc000a7f290) (0xc000a2e780) Create stream\nI0501 16:12:04.582818 2792 log.go:172] (0xc000a7f290) (0xc000a2e780) Stream added, broadcasting: 1\nI0501 16:12:04.587283 2792 log.go:172] (0xc000a7f290) Reply frame received for 1\nI0501 16:12:04.587311 2792 log.go:172] (0xc000a7f290) (0xc00062f5e0) Create stream\nI0501 16:12:04.587319 2792 log.go:172] (0xc000a7f290) (0xc00062f5e0) Stream added, broadcasting: 3\nI0501 16:12:04.588317 2792 log.go:172] (0xc000a7f290) Reply frame received for 3\nI0501 16:12:04.588359 2792 log.go:172] (0xc000a7f290) (0xc000454a00) Create 
stream\nI0501 16:12:04.588371 2792 log.go:172] (0xc000a7f290) (0xc000454a00) Stream added, broadcasting: 5\nI0501 16:12:04.589543 2792 log.go:172] (0xc000a7f290) Reply frame received for 5\nI0501 16:12:04.658457 2792 log.go:172] (0xc000a7f290) Data frame received for 3\nI0501 16:12:04.658491 2792 log.go:172] (0xc00062f5e0) (3) Data frame handling\nI0501 16:12:04.661603 2792 log.go:172] (0xc000a7f290) Data frame received for 5\nI0501 16:12:04.661639 2792 log.go:172] (0xc000454a00) (5) Data frame handling\nI0501 16:12:04.661653 2792 log.go:172] (0xc000454a00) (5) Data frame sent\nI0501 16:12:04.661663 2792 log.go:172] (0xc000a7f290) Data frame received for 5\nI0501 16:12:04.661704 2792 log.go:172] (0xc000a7f290) Data frame received for 1\nI0501 16:12:04.661714 2792 log.go:172] (0xc000a2e780) (1) Data frame handling\nI0501 16:12:04.661723 2792 log.go:172] (0xc000a2e780) (1) Data frame sent\nI0501 16:12:04.661733 2792 log.go:172] (0xc000a7f290) (0xc000a2e780) Stream removed, broadcasting: 1\nI0501 16:12:04.661765 2792 log.go:172] (0xc000454a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.220.19 80\nConnection to 10.102.220.19 80 port [tcp/http] succeeded!\nI0501 16:12:04.661926 2792 log.go:172] (0xc000a7f290) Go away received\nI0501 16:12:04.662465 2792 log.go:172] (0xc000a7f290) (0xc000a2e780) Stream removed, broadcasting: 1\nI0501 16:12:04.662482 2792 log.go:172] (0xc000a7f290) (0xc00062f5e0) Stream removed, broadcasting: 3\nI0501 16:12:04.662491 2792 log.go:172] (0xc000a7f290) (0xc000454a00) Stream removed, broadcasting: 5\n" May 1 16:12:04.667: INFO: stdout: "" May 1 16:12:04.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9133 execpodlx8xv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31711' May 1 16:12:04.900: INFO: stderr: "I0501 16:12:04.785268 2813 log.go:172] (0xc000af89a0) (0xc000a2a5a0) Create stream\nI0501 16:12:04.785335 2813 log.go:172] (0xc000af89a0) 
(0xc000a2a5a0) Stream added, broadcasting: 1\nI0501 16:12:04.790638 2813 log.go:172] (0xc000af89a0) Reply frame received for 1\nI0501 16:12:04.790693 2813 log.go:172] (0xc000af89a0) (0xc000521680) Create stream\nI0501 16:12:04.790709 2813 log.go:172] (0xc000af89a0) (0xc000521680) Stream added, broadcasting: 3\nI0501 16:12:04.791628 2813 log.go:172] (0xc000af89a0) Reply frame received for 3\nI0501 16:12:04.791662 2813 log.go:172] (0xc000af89a0) (0xc000404aa0) Create stream\nI0501 16:12:04.791676 2813 log.go:172] (0xc000af89a0) (0xc000404aa0) Stream added, broadcasting: 5\nI0501 16:12:04.792583 2813 log.go:172] (0xc000af89a0) Reply frame received for 5\nI0501 16:12:04.893788 2813 log.go:172] (0xc000af89a0) Data frame received for 3\nI0501 16:12:04.893814 2813 log.go:172] (0xc000521680) (3) Data frame handling\nI0501 16:12:04.893829 2813 log.go:172] (0xc000af89a0) Data frame received for 5\nI0501 16:12:04.893834 2813 log.go:172] (0xc000404aa0) (5) Data frame handling\nI0501 16:12:04.893840 2813 log.go:172] (0xc000404aa0) (5) Data frame sent\nI0501 16:12:04.893846 2813 log.go:172] (0xc000af89a0) Data frame received for 5\nI0501 16:12:04.893850 2813 log.go:172] (0xc000404aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31711\nConnection to 172.17.0.15 31711 port [tcp/31711] succeeded!\nI0501 16:12:04.894969 2813 log.go:172] (0xc000af89a0) Data frame received for 1\nI0501 16:12:04.895001 2813 log.go:172] (0xc000a2a5a0) (1) Data frame handling\nI0501 16:12:04.895020 2813 log.go:172] (0xc000a2a5a0) (1) Data frame sent\nI0501 16:12:04.895119 2813 log.go:172] (0xc000af89a0) (0xc000a2a5a0) Stream removed, broadcasting: 1\nI0501 16:12:04.895531 2813 log.go:172] (0xc000af89a0) (0xc000a2a5a0) Stream removed, broadcasting: 1\nI0501 16:12:04.895560 2813 log.go:172] (0xc000af89a0) (0xc000521680) Stream removed, broadcasting: 3\nI0501 16:12:04.895708 2813 log.go:172] (0xc000af89a0) (0xc000404aa0) Stream removed, broadcasting: 5\n" May 1 16:12:04.900: INFO: stdout: "" May 1 
16:12:04.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9133 execpodlx8xv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31711' May 1 16:12:05.404: INFO: stderr: "I0501 16:12:05.323385 2835 log.go:172] (0xc000a2c580) (0xc0009861e0) Create stream\nI0501 16:12:05.323453 2835 log.go:172] (0xc000a2c580) (0xc0009861e0) Stream added, broadcasting: 1\nI0501 16:12:05.327873 2835 log.go:172] (0xc000a2c580) Reply frame received for 1\nI0501 16:12:05.327920 2835 log.go:172] (0xc000a2c580) (0xc000595680) Create stream\nI0501 16:12:05.327934 2835 log.go:172] (0xc000a2c580) (0xc000595680) Stream added, broadcasting: 3\nI0501 16:12:05.330601 2835 log.go:172] (0xc000a2c580) Reply frame received for 3\nI0501 16:12:05.330647 2835 log.go:172] (0xc000a2c580) (0xc000450aa0) Create stream\nI0501 16:12:05.330659 2835 log.go:172] (0xc000a2c580) (0xc000450aa0) Stream added, broadcasting: 5\nI0501 16:12:05.331763 2835 log.go:172] (0xc000a2c580) Reply frame received for 5\nI0501 16:12:05.397802 2835 log.go:172] (0xc000a2c580) Data frame received for 5\nI0501 16:12:05.397843 2835 log.go:172] (0xc000450aa0) (5) Data frame handling\nI0501 16:12:05.397867 2835 log.go:172] (0xc000450aa0) (5) Data frame sent\nI0501 16:12:05.397880 2835 log.go:172] (0xc000a2c580) Data frame received for 5\nI0501 16:12:05.397893 2835 log.go:172] (0xc000450aa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31711\nConnection to 172.17.0.18 31711 port [tcp/31711] succeeded!\nI0501 16:12:05.397961 2835 log.go:172] (0xc000a2c580) Data frame received for 3\nI0501 16:12:05.397996 2835 log.go:172] (0xc000595680) (3) Data frame handling\nI0501 16:12:05.399213 2835 log.go:172] (0xc000a2c580) Data frame received for 1\nI0501 16:12:05.399229 2835 log.go:172] (0xc0009861e0) (1) Data frame handling\nI0501 16:12:05.399255 2835 log.go:172] (0xc0009861e0) (1) Data frame sent\nI0501 16:12:05.399276 2835 log.go:172] (0xc000a2c580) 
(0xc0009861e0) Stream removed, broadcasting: 1\nI0501 16:12:05.399302 2835 log.go:172] (0xc000a2c580) Go away received\nI0501 16:12:05.399648 2835 log.go:172] (0xc000a2c580) (0xc0009861e0) Stream removed, broadcasting: 1\nI0501 16:12:05.399671 2835 log.go:172] (0xc000a2c580) (0xc000595680) Stream removed, broadcasting: 3\nI0501 16:12:05.399682 2835 log.go:172] (0xc000a2c580) (0xc000450aa0) Stream removed, broadcasting: 5\n" May 1 16:12:05.404: INFO: stdout: "" May 1 16:12:05.404: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:12:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9133" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:15.982 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":165,"skipped":2908,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:12:06.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 1 16:12:06.907: INFO: Waiting up to 5m0s for pod "pod-997d3892-227c-497d-a7e5-020844a71762" in namespace "emptydir-7287" to be "Succeeded or Failed" May 1 16:12:07.084: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762": Phase="Pending", Reason="", readiness=false. Elapsed: 176.611268ms May 1 16:12:09.396: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488398246s May 1 16:12:11.400: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493093312s May 1 16:12:13.677: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762": Phase="Running", Reason="", readiness=true. Elapsed: 6.769744203s May 1 16:12:15.709: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.801937821s STEP: Saw pod success May 1 16:12:15.709: INFO: Pod "pod-997d3892-227c-497d-a7e5-020844a71762" satisfied condition "Succeeded or Failed" May 1 16:12:15.732: INFO: Trying to get logs from node kali-worker2 pod pod-997d3892-227c-497d-a7e5-020844a71762 container test-container: STEP: delete the pod May 1 16:12:16.697: INFO: Waiting for pod pod-997d3892-227c-497d-a7e5-020844a71762 to disappear May 1 16:12:16.832: INFO: Pod pod-997d3892-227c-497d-a7e5-020844a71762 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 1 16:12:16.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7287" for this suite. 
• [SLOW TEST:10.414 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2927,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 1 16:12:16.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 1 16:12:17.026: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:12:20.171: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:12:22.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946341, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946339, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:12:24.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946341, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946339, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:12:26.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946340, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946341, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946339, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:12:29.342: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:12:29.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7770" for this suite.
STEP: Destroying namespace "webhook-7770-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.894 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":168,"skipped":2977,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:12:29.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May  1 16:12:35.443: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:12:36.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1454" for this suite.

• [SLOW TEST:6.738 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":169,"skipped":3001,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:12:36.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0501 16:12:38.470268       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  1 16:12:38.470: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:12:38.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8975" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":170,"skipped":3003,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:12:38.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:12:39.241: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:12:47.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8830" for this suite.

• [SLOW TEST:9.178 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":171,"skipped":3004,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:12:47.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-8184
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8184
STEP: Deleting pre-stop pod
May  1 16:13:03.349: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:03.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8184" for this suite.

• [SLOW TEST:15.826 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":172,"skipped":3045,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:03.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May  1 16:13:08.118: INFO: &Pod{ObjectMeta:{send-events-f6fa9f97-bf6b-4881-acc4-8509044b1ac5  events-5134 /api/v1/namespaces/events-5134/pods/send-events-f6fa9f97-bf6b-4881-acc4-8509044b1ac5 b6c96b0e-8e73-41c4-aed7-560338ef5ca5 668111 0 2020-05-01 16:13:03 +0000 UTC   map[name:foo time:924535589] map[] [] []  [{e2e.test Update v1 2020-05-01 16:13:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 16:13:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 51 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9wjrv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9wjrv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9wjrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangeP
olicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:13:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:13:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:13:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:13:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.23,StartTime:2020-05-01 16:13:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 16:13:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b9f2327422fc372b1199e7d5343259920eb0ab3fc2dbc02f9d7b46faff1ba018,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
May  1 16:13:10.123: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May  1 16:13:12.128: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:12.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5134" for this suite.

• [SLOW TEST:8.736 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":173,"skipped":3063,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:12.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:13:12.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28" in namespace "projected-7023" to be "Succeeded or Failed"
May  1 16:13:12.372: INFO: Pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28": Phase="Pending", Reason="", readiness=false. Elapsed: 60.085332ms
May  1 16:13:14.376: INFO: Pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064554058s
May  1 16:13:16.381: INFO: Pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068969501s
May  1 16:13:18.408: INFO: Pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096330709s
STEP: Saw pod success
May  1 16:13:18.408: INFO: Pod "downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28" satisfied condition "Succeeded or Failed"
May  1 16:13:18.411: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28 container client-container: 
STEP: delete the pod
May  1 16:13:18.492: INFO: Waiting for pod downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28 to disappear
May  1 16:13:18.506: INFO: Pod downwardapi-volume-1fd2f700-a3dd-4ee4-9909-adec6bfe5c28 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:18.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7023" for this suite.

• [SLOW TEST:6.386 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3063,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:18.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:13:21.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:13:23.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946400, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:13:25.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946401, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946400, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:13:28.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:29.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-702" for this suite.
STEP: Destroying namespace "webhook-702-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.903 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":175,"skipped":3075,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:30.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May  1 16:13:31.292: INFO: Created pod &Pod{ObjectMeta:{dns-1910  dns-1910 /api/v1/namespaces/dns-1910/pods/dns-1910 bc518468-c1f5-451f-aec7-2b9efe0a626f 668298 0 2020-05-01 16:13:31 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-05-01 16:13:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gtlf8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gtlf8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gtlf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  1 16:13:31.465: INFO: The status of Pod dns-1910 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:13:33.470: INFO: The status of Pod dns-1910 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:13:35.468: INFO: The status of Pod dns-1910 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:13:37.468: INFO: The status of Pod dns-1910 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
May  1 16:13:37.468: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1910 PodName:dns-1910 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:13:37.468: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:13:37.496339       7 log.go:172] (0xc002b1e580) (0xc00110f9a0) Create stream
I0501 16:13:37.496378       7 log.go:172] (0xc002b1e580) (0xc00110f9a0) Stream added, broadcasting: 1
I0501 16:13:37.499450       7 log.go:172] (0xc002b1e580) Reply frame received for 1
I0501 16:13:37.499479       7 log.go:172] (0xc002b1e580) (0xc00110fae0) Create stream
I0501 16:13:37.499488       7 log.go:172] (0xc002b1e580) (0xc00110fae0) Stream added, broadcasting: 3
I0501 16:13:37.502466       7 log.go:172] (0xc002b1e580) Reply frame received for 3
I0501 16:13:37.502512       7 log.go:172] (0xc002b1e580) (0xc000c039a0) Create stream
I0501 16:13:37.502533       7 log.go:172] (0xc002b1e580) (0xc000c039a0) Stream added, broadcasting: 5
I0501 16:13:37.503571       7 log.go:172] (0xc002b1e580) Reply frame received for 5
I0501 16:13:37.559371       7 log.go:172] (0xc002b1e580) Data frame received for 3
I0501 16:13:37.559396       7 log.go:172] (0xc00110fae0) (3) Data frame handling
I0501 16:13:37.559412       7 log.go:172] (0xc00110fae0) (3) Data frame sent
I0501 16:13:37.560210       7 log.go:172] (0xc002b1e580) Data frame received for 3
I0501 16:13:37.560258       7 log.go:172] (0xc00110fae0) (3) Data frame handling
I0501 16:13:37.560301       7 log.go:172] (0xc002b1e580) Data frame received for 5
I0501 16:13:37.560321       7 log.go:172] (0xc000c039a0) (5) Data frame handling
I0501 16:13:37.561707       7 log.go:172] (0xc002b1e580) Data frame received for 1
I0501 16:13:37.561727       7 log.go:172] (0xc00110f9a0) (1) Data frame handling
I0501 16:13:37.561739       7 log.go:172] (0xc00110f9a0) (1) Data frame sent
I0501 16:13:37.561870       7 log.go:172] (0xc002b1e580) (0xc00110f9a0) Stream removed, broadcasting: 1
I0501 16:13:37.561917       7 log.go:172] (0xc002b1e580) Go away received
I0501 16:13:37.562217       7 log.go:172] (0xc002b1e580) (0xc00110f9a0) Stream removed, broadcasting: 1
I0501 16:13:37.562234       7 log.go:172] (0xc002b1e580) (0xc00110fae0) Stream removed, broadcasting: 3
I0501 16:13:37.562247       7 log.go:172] (0xc002b1e580) (0xc000c039a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May  1 16:13:37.562: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1910 PodName:dns-1910 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:13:37.562: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:13:37.594955       7 log.go:172] (0xc002b1ebb0) (0xc000195680) Create stream
I0501 16:13:37.594982       7 log.go:172] (0xc002b1ebb0) (0xc000195680) Stream added, broadcasting: 1
I0501 16:13:37.596834       7 log.go:172] (0xc002b1ebb0) Reply frame received for 1
I0501 16:13:37.596875       7 log.go:172] (0xc002b1ebb0) (0xc0016ac000) Create stream
I0501 16:13:37.596891       7 log.go:172] (0xc002b1ebb0) (0xc0016ac000) Stream added, broadcasting: 3
I0501 16:13:37.598277       7 log.go:172] (0xc002b1ebb0) Reply frame received for 3
I0501 16:13:37.598344       7 log.go:172] (0xc002b1ebb0) (0xc000b80500) Create stream
I0501 16:13:37.598375       7 log.go:172] (0xc002b1ebb0) (0xc000b80500) Stream added, broadcasting: 5
I0501 16:13:37.599232       7 log.go:172] (0xc002b1ebb0) Reply frame received for 5
I0501 16:13:37.670355       7 log.go:172] (0xc002b1ebb0) Data frame received for 3
I0501 16:13:37.670404       7 log.go:172] (0xc0016ac000) (3) Data frame handling
I0501 16:13:37.670421       7 log.go:172] (0xc0016ac000) (3) Data frame sent
I0501 16:13:37.674201       7 log.go:172] (0xc002b1ebb0) Data frame received for 3
I0501 16:13:37.674237       7 log.go:172] (0xc0016ac000) (3) Data frame handling
I0501 16:13:37.674266       7 log.go:172] (0xc002b1ebb0) Data frame received for 5
I0501 16:13:37.674283       7 log.go:172] (0xc000b80500) (5) Data frame handling
I0501 16:13:37.674848       7 log.go:172] (0xc002b1ebb0) Data frame received for 1
I0501 16:13:37.674917       7 log.go:172] (0xc000195680) (1) Data frame handling
I0501 16:13:37.674984       7 log.go:172] (0xc000195680) (1) Data frame sent
I0501 16:13:37.675003       7 log.go:172] (0xc002b1ebb0) (0xc000195680) Stream removed, broadcasting: 1
I0501 16:13:37.675019       7 log.go:172] (0xc002b1ebb0) Go away received
I0501 16:13:37.675258       7 log.go:172] (0xc002b1ebb0) (0xc000195680) Stream removed, broadcasting: 1
I0501 16:13:37.675276       7 log.go:172] (0xc002b1ebb0) (0xc0016ac000) Stream removed, broadcasting: 3
I0501 16:13:37.675288       7 log.go:172] (0xc002b1ebb0) (0xc000b80500) Stream removed, broadcasting: 5
May  1 16:13:37.675: INFO: Deleting pod dns-1910...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:37.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1910" for this suite.

• [SLOW TEST:7.407 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":176,"skipped":3082,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:37.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-f1102d02-60d3-4090-ae5c-c6b98c892644
STEP: Creating a pod to test consume configMaps
May  1 16:13:39.348: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e" in namespace "projected-268" to be "Succeeded or Failed"
May  1 16:13:39.435: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 86.609695ms
May  1 16:13:41.636: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287843183s
May  1 16:13:43.651: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302588115s
May  1 16:13:46.230: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.881329719s
May  1 16:13:48.283: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.934535233s
STEP: Saw pod success
May  1 16:13:48.283: INFO: Pod "pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e" satisfied condition "Succeeded or Failed"
May  1 16:13:48.286: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e container projected-configmap-volume-test: 
STEP: delete the pod
May  1 16:13:49.014: INFO: Waiting for pod pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e to disappear
May  1 16:13:49.089: INFO: Pod pod-projected-configmaps-19999e61-fd5f-43eb-bc9b-dee9b3054e5e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:49.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-268" for this suite.

• [SLOW TEST:11.499 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3100,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:49.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:52.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7853" for this suite.
STEP: Destroying namespace "nspatchtest-cba038ca-de0d-4a42-b39d-117ae2dff283-9890" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":178,"skipped":3102,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:52.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-da2f115e-38a1-470f-839b-e1f4043438c5
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:13:52.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-982" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":179,"skipped":3123,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:13:52.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  1 16:13:52.619: INFO: Waiting up to 5m0s for pod "pod-50393722-0487-4e31-ad10-1325c10cefdc" in namespace "emptydir-4290" to be "Succeeded or Failed"
May  1 16:13:52.673: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 53.980443ms
May  1 16:13:54.736: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11629534s
May  1 16:13:56.779: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160016318s
May  1 16:13:59.003: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3833289s
May  1 16:14:01.005: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.386056981s
STEP: Saw pod success
May  1 16:14:01.006: INFO: Pod "pod-50393722-0487-4e31-ad10-1325c10cefdc" satisfied condition "Succeeded or Failed"
May  1 16:14:01.008: INFO: Trying to get logs from node kali-worker2 pod pod-50393722-0487-4e31-ad10-1325c10cefdc container test-container: 
STEP: delete the pod
May  1 16:14:01.217: INFO: Waiting for pod pod-50393722-0487-4e31-ad10-1325c10cefdc to disappear
May  1 16:14:01.220: INFO: Pod pod-50393722-0487-4e31-ad10-1325c10cefdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:14:01.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4290" for this suite.

• [SLOW TEST:8.754 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":3132,"failed":0}
SSS
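The tests above repeatedly show the framework polling a pod's phase ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", then a `Phase=...`/`Elapsed:` line every couple of seconds). A minimal sketch of that wait loop, under the assumption of a caller-supplied `get_phase` callable (this is an illustrative helper, not the actual e2e framework code):

```python
import time


def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase or timeout expires.

    Mirrors the log pattern above: each poll prints the current phase and the
    elapsed time, and the wait succeeds on either "Succeeded" or "Failed".
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)
```

Driving it with a fake phase sequence reproduces the Pending/Pending/Succeeded progression seen in the log.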
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:14:01.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May  1 16:14:09.538: INFO: Pod pod-hostip-eb5a2302-2af0-47b6-82f7-d1a78a639517 has hostIP: 172.17.0.15
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:14:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5151" for this suite.

• [SLOW TEST:8.481 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3135,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:14:09.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-aa5d61ff-a653-4c28-bd6a-2f0afd3b041a
STEP: Creating a pod to test consume secrets
May  1 16:14:10.007: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0" in namespace "projected-2625" to be "Succeeded or Failed"
May  1 16:14:10.045: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0": Phase="Pending", Reason="", readiness=false. Elapsed: 37.519942ms
May  1 16:14:12.163: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155777868s
May  1 16:14:14.167: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159576031s
May  1 16:14:16.171: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0": Phase="Running", Reason="", readiness=true. Elapsed: 6.163080973s
May  1 16:14:18.175: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167446327s
STEP: Saw pod success
May  1 16:14:18.175: INFO: Pod "pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0" satisfied condition "Succeeded or Failed"
May  1 16:14:18.178: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0 container secret-volume-test: 
STEP: delete the pod
May  1 16:14:18.278: INFO: Waiting for pod pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0 to disappear
May  1 16:14:18.286: INFO: Pod pod-projected-secrets-b08124f2-f3d2-4cbd-bf9d-1cead79772b0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:14:18.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2625" for this suite.

• [SLOW TEST:8.587 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:14:18.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
May  1 16:14:18.338: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-2729 -- logs-generator --log-lines-total 100 --run-duration 20s'
May  1 16:14:18.447: INFO: stderr: ""
May  1 16:14:18.447: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May  1 16:14:18.447: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May  1 16:14:18.447: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2729" to be "running and ready, or succeeded"
May  1 16:14:18.454: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.089583ms
May  1 16:14:20.522: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075263694s
May  1 16:14:22.526: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079124321s
May  1 16:14:24.603: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.155960293s
May  1 16:14:24.603: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May  1 16:14:24.603: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May  1 16:14:24.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729'
May  1 16:14:24.730: INFO: stderr: ""
May  1 16:14:24.730: INFO: stdout: "I0501 16:14:23.139032       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/zgqz 220\nI0501 16:14:23.339137       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/4td8 329\nI0501 16:14:23.539279       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/hq5 455\nI0501 16:14:23.739216       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kt6x 467\nI0501 16:14:23.939180       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/q7p 367\nI0501 16:14:24.139143       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/nzx 577\nI0501 16:14:24.339196       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/bf7 381\nI0501 16:14:24.539274       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/rwg4 384\n"
STEP: limiting log lines
May  1 16:14:24.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729 --tail=1'
May  1 16:14:24.836: INFO: stderr: ""
May  1 16:14:24.836: INFO: stdout: "I0501 16:14:24.739183       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/vct 470\n"
May  1 16:14:24.836: INFO: got output "I0501 16:14:24.739183       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/vct 470\n"
STEP: limiting log bytes
May  1 16:14:24.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729 --limit-bytes=1'
May  1 16:14:24.943: INFO: stderr: ""
May  1 16:14:24.943: INFO: stdout: "I"
May  1 16:14:24.943: INFO: got output "I"
STEP: exposing timestamps
May  1 16:14:24.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729 --tail=1 --timestamps'
May  1 16:14:25.045: INFO: stderr: ""
May  1 16:14:25.045: INFO: stdout: "2020-05-01T16:14:24.93930578Z I0501 16:14:24.939187       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mvm 367\n"
May  1 16:14:25.045: INFO: got output "2020-05-01T16:14:24.93930578Z I0501 16:14:24.939187       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mvm 367\n"
STEP: restricting to a time range
May  1 16:14:27.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729 --since=1s'
May  1 16:14:27.654: INFO: stderr: ""
May  1 16:14:27.654: INFO: stdout: "I0501 16:14:26.739210       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/l7sl 327\nI0501 16:14:26.939202       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/wds 555\nI0501 16:14:27.139219       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/j9f 371\nI0501 16:14:27.339188       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/jtd9 309\nI0501 16:14:27.539195       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/7l2 289\n"
May  1 16:14:27.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2729 --since=24h'
May  1 16:14:27.760: INFO: stderr: ""
May  1 16:14:27.761: INFO: stdout: "I0501 16:14:23.139032       1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/zgqz 220\nI0501 16:14:23.339137       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/4td8 329\nI0501 16:14:23.539279       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/hq5 455\nI0501 16:14:23.739216       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/kt6x 467\nI0501 16:14:23.939180       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/q7p 367\nI0501 16:14:24.139143       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/nzx 577\nI0501 16:14:24.339196       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/bf7 381\nI0501 16:14:24.539274       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/rwg4 384\nI0501 16:14:24.739183       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/vct 470\nI0501 16:14:24.939187       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mvm 367\nI0501 16:14:25.139187       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/l6jf 532\nI0501 16:14:25.339206       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/5sgg 380\nI0501 16:14:25.539156       1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/xphd 326\nI0501 16:14:25.739232       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/tfv 380\nI0501 16:14:25.939168       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/5zz 200\nI0501 16:14:26.139238       1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/8cnb 273\nI0501 16:14:26.339142       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/6c7 484\nI0501 16:14:26.539178       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/wj7s 355\nI0501 16:14:26.739210       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/l7sl 327\nI0501 16:14:26.939202       1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/wds 555\nI0501 16:14:27.139219       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/j9f 371\nI0501 16:14:27.339188       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/jtd9 309\nI0501 16:14:27.539195       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/7l2 289\nI0501 16:14:27.739179       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/68b 237\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May  1 16:14:27.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2729'
May  1 16:14:31.724: INFO: stderr: ""
May  1 16:14:31.724: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:14:31.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2729" for this suite.

• [SLOW TEST:13.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":183,"skipped":3171,"failed":0}
SSSSSS
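The `logs-generator` output captured in the kubectl test above follows a fixed line shape: a glog-style header, then a sequence number, an HTTP method, a URL, and a trailing numeric field. A minimal sketch of parsing one such line (an illustrative, hypothetical helper based only on the format visible in the log, not part of the e2e suite):

```python
import re

# Matches lines like the ones in the captured stdout above, e.g.
# "I0501 16:14:24.739183       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/vct 470"
LINE_RE = re.compile(
    r"^I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+logs_generator\.go:\d+\]\s+"
    r"(?P<seq>\d+)\s+(?P<method>GET|POST|PUT)\s+(?P<url>\S+)\s+(?P<num>\d+)$"
)


def parse_line(line):
    """Return (seq, method, url, num) for one logs-generator line, or None."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return int(m.group("seq")), m.group("method"), m.group("url"), int(m.group("num"))
```

A parser like this is one way a test could assert on "a matching string" in the retrieved logs rather than comparing raw text.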
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:14:31.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:15:10.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2247" for this suite.

• [SLOW TEST:38.517 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3177,"failed":0}
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:15:10.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May  1 16:15:10.573: INFO: Waiting up to 5m0s for pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6" in namespace "emptydir-8937" to be "Succeeded or Failed"
May  1 16:15:10.587: INFO: Pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.808435ms
May  1 16:15:12.655: INFO: Pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081663588s
May  1 16:15:14.870: INFO: Pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6": Phase="Running", Reason="", readiness=true. Elapsed: 4.296521365s
May  1 16:15:16.978: INFO: Pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.405059338s
STEP: Saw pod success
May  1 16:15:16.978: INFO: Pod "pod-cda71842-2b42-42be-8897-e5ab1328c4a6" satisfied condition "Succeeded or Failed"
May  1 16:15:17.052: INFO: Trying to get logs from node kali-worker pod pod-cda71842-2b42-42be-8897-e5ab1328c4a6 container test-container: 
STEP: delete the pod
May  1 16:15:17.906: INFO: Waiting for pod pod-cda71842-2b42-42be-8897-e5ab1328c4a6 to disappear
May  1 16:15:18.087: INFO: Pod pod-cda71842-2b42-42be-8897-e5ab1328c4a6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:15:18.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8937" for this suite.

• [SLOW TEST:8.538 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3177,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:15:18.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May  1 16:15:19.565: INFO: >>> kubeConfig: /root/.kube/config
May  1 16:15:22.556: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:15:33.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3902" for this suite.

• [SLOW TEST:15.307 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":186,"skipped":3192,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:15:34.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:15:34.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:15:42.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2266" for this suite.

• [SLOW TEST:8.746 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3224,"failed":0}
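The websocket exec test above upgrades a connection against the pod `exec` subresource. A sketch of how that request URL is assembled (host and pod name are placeholders; `command` may repeat, per the Kubernetes API):

```python
from urllib.parse import urlencode

# Sketch: build the URL for the pod "exec" subresource that a websocket
# client upgrades against. Parameter names follow the Kubernetes core v1 API;
# the host and pod name below are illustrative placeholders.
def exec_url(host, namespace, pod, command, container=None,
             stdout=True, stderr=True, stdin=False, tty=False):
    params = [("command", c) for c in command]  # one entry per argv element
    if container:
        params.append(("container", container))
    for name, val in (("stdin", stdin), ("stdout", stdout),
                      ("stderr", stderr), ("tty", tty)):
        params.append((name, str(val).lower()))
    return f"{host}/api/v1/namespaces/{namespace}/pods/{pod}/exec?{urlencode(params)}"

url = exec_url("wss://172.30.12.66:32772", "pods-2266",
               "pod-exec-websocket", ["echo", "remote execution"])
assert "command=echo" in url and "command=remote+execution" in url
```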
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:15:42.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2858.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2858.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2858.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2858.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2858.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  1 16:15:55.441: INFO: DNS probes using dns-2858/dns-test-c9adc2ae-a7b6-4e44-80c2-942e387961eb succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:15:55.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2858" for this suite.

• [SLOW TEST:12.811 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":188,"skipped":3227,"failed":0}
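The probe commands above derive a pod's A record by replacing the dots in its IP with dashes (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}'`). A direct Python translation of that pipeline (the IP below is illustrative):

```python
# Translate a pod IP into its in-cluster A record name, as the awk pipeline
# in the DNS probe above does: 10.244.1.5 -> 10-244-1-5.<ns>.pod.cluster.local
def pod_a_record(pod_ip, namespace, domain="cluster.local"):
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

assert pod_a_record("10.244.1.5", "dns-2858") == \
    "10-244-1-5.dns-2858.pod.cluster.local"
```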
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:15:55.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May  1 16:15:56.305: INFO: Waiting up to 5m0s for pod "pod-710296aa-e968-41d1-990a-661d535c01d6" in namespace "emptydir-7343" to be "Succeeded or Failed"
May  1 16:15:56.628: INFO: Pod "pod-710296aa-e968-41d1-990a-661d535c01d6": Phase="Pending", Reason="", readiness=false. Elapsed: 323.028413ms
May  1 16:15:58.726: INFO: Pod "pod-710296aa-e968-41d1-990a-661d535c01d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42150815s
May  1 16:16:00.799: INFO: Pod "pod-710296aa-e968-41d1-990a-661d535c01d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494291255s
May  1 16:16:03.003: INFO: Pod "pod-710296aa-e968-41d1-990a-661d535c01d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.697750855s
STEP: Saw pod success
May  1 16:16:03.003: INFO: Pod "pod-710296aa-e968-41d1-990a-661d535c01d6" satisfied condition "Succeeded or Failed"
May  1 16:16:03.307: INFO: Trying to get logs from node kali-worker2 pod pod-710296aa-e968-41d1-990a-661d535c01d6 container test-container: 
STEP: delete the pod
May  1 16:16:04.054: INFO: Waiting for pod pod-710296aa-e968-41d1-990a-661d535c01d6 to disappear
May  1 16:16:04.067: INFO: Pod pod-710296aa-e968-41d1-990a-661d535c01d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:16:04.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7343" for this suite.

• [SLOW TEST:8.477 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3250,"failed":0}
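The emptyDir test above asserts that a volume directory created with mode 0777 really carries those permission bits. The core of that check can be sketched locally, with a temp directory standing in for the emptyDir mount point:

```python
import os
import stat
import tempfile

# Sketch of the permission assertion in the (non-root,0777,default) test:
# create a directory, set mode 0777 explicitly (so the process umask cannot
# interfere), and read the bits back. The temp dir is a stand-in for the
# emptyDir mount inside the pod.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "volume")
    os.mkdir(path)
    os.chmod(path, 0o777)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o777, oct(mode)
```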
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:16:04.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d3d091f8-1414-4a97-bb00-b6cfe483bf42
STEP: Creating a pod to test consume configMaps
May  1 16:16:04.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a" in namespace "configmap-949" to be "Succeeded or Failed"
May  1 16:16:05.122: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Pending", Reason="", readiness=false. Elapsed: 292.514725ms
May  1 16:16:07.547: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.71791015s
May  1 16:16:09.698: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868555493s
May  1 16:16:11.870: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.041009434s
May  1 16:16:14.128: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.298893909s
May  1 16:16:16.372: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Running", Reason="", readiness=true. Elapsed: 11.542527579s
May  1 16:16:18.376: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.546519799s
STEP: Saw pod success
May  1 16:16:18.376: INFO: Pod "pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a" satisfied condition "Succeeded or Failed"
May  1 16:16:18.379: INFO: Trying to get logs from node kali-worker pod pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a container configmap-volume-test: 
STEP: delete the pod
May  1 16:16:18.448: INFO: Waiting for pod pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a to disappear
May  1 16:16:18.481: INFO: Pod pod-configmaps-598c1ecd-19c2-4190-b7a1-759d8e66317a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:16:18.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-949" for this suite.

• [SLOW TEST:14.361 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3286,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:16:18.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  1 16:16:19.028: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  1 16:16:19.261: INFO: Waiting for terminating namespaces to be deleted...
May  1 16:16:19.268: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  1 16:16:19.377: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:16:19.377: INFO: 	Container kindnet-cni ready: true, restart count 1
May  1 16:16:19.377: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:16:19.377: INFO: 	Container kube-proxy ready: true, restart count 0
May  1 16:16:19.377: INFO: pod-exec-websocket-de1b8905-1d9e-43c4-9484-3e0406afb1cb from pods-2266 started at 2020-05-01 16:15:34 +0000 UTC (1 container statuses recorded)
May  1 16:16:19.377: INFO: 	Container main ready: true, restart count 0
May  1 16:16:19.377: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  1 16:16:19.383: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:16:19.383: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 16:16:19.383: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:16:19.383: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-55f8ca52-31b8-4654-9c91-bd5ac26d775f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-55f8ca52-31b8-4654-9c91-bd5ac26d775f off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-55f8ca52-31b8-4654-9c91-bd5ac26d775f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:16:38.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1785" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:19.863 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":191,"skipped":3294,"failed":0}
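The predicate being validated above is simple: a pod fits a node only if every key/value pair in its `nodeSelector` is present in the node's labels. A sketch of that match, reusing the random label the test applied:

```python
# NodeSelector predicate sketch: every selector entry must match a node
# label exactly. The label key/value below echo the random label from the
# log ("... e2e-55f8ca52-... 42").
def node_selector_matches(node_labels, node_selector):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

label_key = "kubernetes.io/e2e-55f8ca52-31b8-4654-9c91-bd5ac26d775f"
labels = {label_key: "42"}

assert node_selector_matches(labels, {label_key: "42"})       # pod schedules
assert not node_selector_matches({}, {label_key: "42"})       # no label, no fit
assert not node_selector_matches(labels, {label_key: "41"})   # value must match
```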
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:16:38.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:16:45.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7199" for this suite.

• [SLOW TEST:8.096 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":192,"skipped":3343,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:16:46.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  1 16:17:07.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:07.835: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:09.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:09.962: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:11.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:12.166: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:13.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:13.839: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:15.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:15.840: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:17.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:17.841: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:19.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:19.840: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:21.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:21.839: INFO: Pod pod-with-poststart-http-hook still exists
May  1 16:17:23.835: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  1 16:17:23.839: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:17:23.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4187" for this suite.

• [SLOW TEST:37.457 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3346,"failed":0}
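The repeated "Waiting for pod ... to disappear" lines above are a 2-second poll loop that stops once the pod GET reports NotFound. A generic sketch of that loop, with `probe` standing in for the API call (the sleep is noted but skipped so the sketch runs instantly):

```python
import itertools

# Poll-until-gone sketch of the deletion wait in the log above. `probe`
# returns True while the pod still exists and False once it is gone; a real
# caller would sleep `interval_s` between attempts.
def wait_until_gone(probe, interval_s=2, max_attempts=150):
    for attempt in itertools.count(1):
        if not probe():
            return attempt
        if attempt >= max_attempts:
            raise TimeoutError("pod still exists")
        # time.sleep(interval_s) would go here in a real client

responses = iter([True, True, True, False])  # exists for 3 polls, then gone
assert wait_until_gone(lambda: next(responses)) == 4
```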
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:17:23.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:17:24.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-229" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":194,"skipped":3352,"failed":0}
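The discovery walk above (fetch `/apis`, find the group, then the group/version) reduces to JSON navigation. A sketch against a trimmed, hand-written sample shaped like the real `APIGroupList` payload (not captured output):

```python
import json

# Sketch of the /apis discovery lookup performed above: locate the
# apiextensions.k8s.io group and confirm its v1 groupVersion is listed.
apis = json.loads("""
{"kind": "APIGroupList", "groups": [
  {"name": "apiextensions.k8s.io",
   "versions": [{"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"}],
   "preferredVersion": {"groupVersion": "apiextensions.k8s.io/v1",
                        "version": "v1"}}
]}
""")

group = next(g for g in apis["groups"] if g["name"] == "apiextensions.k8s.io")
assert any(v["groupVersion"] == "apiextensions.k8s.io/v1"
           for v in group["versions"])
```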
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:17:24.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:17:29.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9365" for this suite.

• [SLOW TEST:6.144 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":195,"skipped":3368,"failed":0}
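The invariant the watch test verifies is that watchers opened at different resource versions all observe the remaining events in the same order, i.e. each later watcher's stream is a suffix of each earlier one's. A sketch with events modeled as (resourceVersion, name) pairs:

```python
# Sketch of the concurrent-watch ordering check above: a watch started at
# resourceVersion rv sees exactly the events with a larger rv, in order,
# so later watchers' streams must be suffixes of earlier ones.
events = [(101, "add-a"), (102, "add-b"), (103, "mod-a"), (104, "del-b")]

def watch_from(rv):
    return [e for e in events if e[0] > rv]

suffixes = [watch_from(rv) for rv, _ in events[:-1]]
for earlier, later in zip(suffixes, suffixes[1:]):
    assert earlier[len(earlier) - len(later):] == later
```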
SS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:17:30.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May  1 16:17:30.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8718'
May  1 16:17:36.257: INFO: stderr: ""
May  1 16:17:36.257: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  1 16:17:37.260: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:37.260: INFO: Found 0 / 1
May  1 16:17:38.262: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:38.263: INFO: Found 0 / 1
May  1 16:17:39.261: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:39.261: INFO: Found 0 / 1
May  1 16:17:40.261: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:40.261: INFO: Found 1 / 1
May  1 16:17:40.261: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May  1 16:17:40.265: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:40.265: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  1 16:17:40.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-nvgd4 --namespace=kubectl-8718 -p {"metadata":{"annotations":{"x":"y"}}}'
May  1 16:17:40.368: INFO: stderr: ""
May  1 16:17:40.368: INFO: stdout: "pod/agnhost-master-nvgd4 patched\n"
STEP: checking annotations
May  1 16:17:40.391: INFO: Selector matched 1 pods for map[app:agnhost]
May  1 16:17:40.391: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:17:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8718" for this suite.

• [SLOW TEST:10.144 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":196,"skipped":3370,"failed":0}
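The `kubectl patch` above applies the merge patch `{"metadata":{"annotations":{"x":"y"}}}` to a pod. The recursive merge semantics can be sketched in a few lines (in the spirit of RFC 7386 JSON Merge Patch; null-deletion handling is omitted for brevity):

```python
# Minimal JSON-merge-patch sketch: dict values merge recursively, anything
# else replaces. Null-means-delete from RFC 7386 is intentionally left out.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        result[key] = merge_patch(result.get(key), value)
    return result

pod = {"metadata": {"name": "agnhost-master-nvgd4", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
assert patched["metadata"]["annotations"] == {"x": "y"}
assert patched["metadata"]["name"] == "agnhost-master-nvgd4"  # untouched
```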
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:17:40.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:17:40.509: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May  1 16:17:40.542: INFO: Pod name sample-pod: Found 0 pods out of 1
May  1 16:17:45.556: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  1 16:17:45.556: INFO: Creating deployment "test-rolling-update-deployment"
May  1 16:17:45.586: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May  1 16:17:45.691: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May  1 16:17:47.700: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May  1 16:17:47.703: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:17:49.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723946665, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:17:51.717: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
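The `Replicas:2, AvailableReplicas:1` status above follows from the RollingUpdate defaults: `maxSurge=25%` rounds up and `maxUnavailable=25%` rounds down, so a 1-replica deployment may briefly run 2 pods (old + new) while never dropping below 1 available. A sketch of that arithmetic:

```python
import math

# RollingUpdate bounds sketch: percentage maxSurge rounds up, percentage
# maxUnavailable rounds down. Returns (max total pods, min available pods).
def rolling_update_bounds(desired, max_surge="25%", max_unavailable="25%"):
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100 * desired
            return math.ceil(frac) if round_up else math.floor(frac)
        return value  # absolute integer value
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return desired + surge, desired - unavailable

assert rolling_update_bounds(1) == (2, 1)  # matches Replicas:2 in the status
assert rolling_update_bounds(4) == (5, 3)
```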
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  1 16:17:51.727: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-5740 /apis/apps/v1/namespaces/deployment-5740/deployments/test-rolling-update-deployment 32a87aa2-9faf-4288-9cd9-5fc8d7760b84 669756 1 2020-05-01 16:17:45 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-05-01 16:17:45 +0000 UTC FieldsV1 [managedFields FieldsV1 raw JSON byte dump elided]} {kube-controller-manager Update apps/v1 2020-05-01 16:17:51 +0000 UTC FieldsV1 [managedFields FieldsV1 raw JSON byte dump elided]
103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004800758  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-01 16:17:45 +0000 UTC,LastTransitionTime:2020-05-01 16:17:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-01 16:17:51 +0000 UTC,LastTransitionTime:2020-05-01 16:17:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  1 16:17:51.730: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-5740 /apis/apps/v1/namespaces/deployment-5740/replicasets/test-rolling-update-deployment-59d5cb45c7 e196dae0-b823-4408-bfb9-f09a73b4123a 669743 1 2020-05-01 16:17:45 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 32a87aa2-9faf-4288-9cd9-5fc8d7760b84 0xc004800c97 0xc004800c98}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:17:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 97 56 55 97 97 50 45 57 102 97 102 45 52 50 56 56 45 57 99 100 57 45 53 102 99 56 100 55 55 54 48 98 56 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004800d28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  1 16:17:51.730: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May  1 16:17:51.731: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-5740 /apis/apps/v1/namespaces/deployment-5740/replicasets/test-rolling-update-controller 8a20566d-8c3d-4f6b-84c8-c4219b6002a5 669755 2 2020-05-01 16:17:40 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 32a87aa2-9faf-4288-9cd9-5fc8d7760b84 0xc004800b87 0xc004800b88}] []  [{e2e.test Update apps/v1 2020-05-01 16:17:40 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-01 16:17:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 50 97 56 55 97 97 50 45 57 102 97 102 45 52 50 56 56 45 57 99 100 57 45 53 102 99 56 100 55 55 54 48 98 56 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 
58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004800c28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 16:17:51.734: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-tsj8b" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-tsj8b test-rolling-update-deployment-59d5cb45c7- deployment-5740 /api/v1/namespaces/deployment-5740/pods/test-rolling-update-deployment-59d5cb45c7-tsj8b ed23eb96-ce71-4fc9-a5f7-312637ce3057 669742 0 2020-05-01 16:17:45 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 e196dae0-b823-4408-bfb9-f09a73b4123a 0xc0048011f7 0xc0048011f8}] []  [{kube-controller-manager Update v1 2020-05-01 16:17:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 49 57 54 100 97 101 48 45 98 56 50 51 45 52 52 48 56 45 98 102 98 57 45 102 48 57 97 55 51 98 52 49 50 51 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 16:17:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 
116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 54 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mrpxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mrpxw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mrpxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,
SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:17:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:17:50 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:17:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.60,StartTime:2020-05-01 16:17:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 16:17:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://878043b97fdfdb832d41e2e4aa51eca27839c3bf6186cdd997dc8aaaa57f03c1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:17:51.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5740" for this suite.

• [SLOW TEST:11.340 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":197,"skipped":3373,"failed":0}
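The Strategy dump above shows a RollingUpdate deployment with maxSurge=25% and maxUnavailable=25% against a single replica. A minimal sketch (plain Python, not the actual controller code) of how those percentages resolve for replicas=1 — surge rounds up, unavailable rounds down — which is why the rollout brings up the new ReplicaSet's pod before deleting the old controller's pod:

```python
import math

def resolve_rolling_update(replicas, max_surge_pct, max_unavailable_pct):
    """Resolve percentage-based rolling-update bounds the way the
    Deployment controller does: maxSurge rounds up, maxUnavailable
    rounds down (and is forced to at least 1 when surge is 0, so the
    rollout can always make progress)."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable

# The deployment in this test: 1 replica, 25%/25%.
surge, unavailable = resolve_rolling_update(1, 25, 25)
print(surge, unavailable)  # 1 0 -> scale the new RS up first, then the old RS down
```

With surge=1 and unavailable=0 the controller may run one extra pod but may never drop below one ready pod, matching the old ReplicaSet's observed scale-down to `Replicas:*0` only after the new pod became available.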
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:17:51.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  1 16:17:52.161: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  1 16:17:52.194: INFO: Waiting for terminating namespaces to be deleted...
May  1 16:17:52.202: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  1 16:17:52.219: INFO: test-rolling-update-deployment-59d5cb45c7-tsj8b from deployment-5740 started at 2020-05-01 16:17:45 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.219: INFO: 	Container agnhost ready: true, restart count 0
May  1 16:17:52.219: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.219: INFO: 	Container kindnet-cni ready: true, restart count 1
May  1 16:17:52.219: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.219: INFO: 	Container kube-proxy ready: true, restart count 0
May  1 16:17:52.219: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  1 16:17:52.225: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.225: INFO: 	Container kube-proxy ready: true, restart count 0
May  1 16:17:52.225: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.225: INFO: 	Container kindnet-cni ready: true, restart count 0
May  1 16:17:52.225: INFO: agnhost-master-nvgd4 from kubectl-8718 started at 2020-05-01 16:17:36 +0000 UTC (1 container statuses recorded)
May  1 16:17:52.225: INFO: 	Container agnhost-master ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-29eb7335-a761-45f4-8ada-6149e5be3753 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-29eb7335-a761-45f4-8ada-6149e5be3753 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-29eb7335-a761-45f4-8ada-6149e5be3753
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:23:06.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4256" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:314.732 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":198,"skipped":3397,"failed":0}
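The predicate exercised above treats an empty hostIP as the wildcard 0.0.0.0, so pod5's 127.0.0.1:54322 collides with pod4's 0.0.0.0:54322 on the same node. A toy model of that conflict check (not the scheduler's actual code; the function name is ours):

```python
def host_ports_conflict(a, b):
    """Two (hostIP, hostPort, protocol) triples conflict when port and
    protocol match and either side binds the wildcard address
    (an empty hostIP counts as 0.0.0.0), or the hostIPs are equal."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    wildcard = {"", "0.0.0.0"}
    return ip_a in wildcard or ip_b in wildcard or ip_a == ip_b

pod4 = ("0.0.0.0", 54322, "TCP")    # scheduled first
pod5 = ("127.0.0.1", 54322, "TCP")  # rejected on the same node
print(host_ports_conflict(pod4, pod5))  # True
```

Note that two pods binding the same port on *different* specific hostIPs (or different protocols) would not conflict, which is why the test pins both the port and the protocol and varies only the hostIP.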
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:23:06.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:24:06.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5645" for this suite.

• [SLOW TEST:60.193 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3398,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:24:06.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9520
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9520
STEP: creating replication controller externalsvc in namespace services-9520
I0501 16:24:06.891805       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9520, replica count: 2
I0501 16:24:09.942394       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:24:12.942684       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:24:15.942871       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May  1 16:24:16.283: INFO: Creating new exec pod
May  1 16:24:24.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-9520 execpodn5rwl -- /bin/sh -x -c nslookup clusterip-service'
May  1 16:24:24.687: INFO: stderr: "I0501 16:24:24.603292    3067 log.go:172] (0xc0009880b0) (0xc0007d3540) Create stream\nI0501 16:24:24.603334    3067 log.go:172] (0xc0009880b0) (0xc0007d3540) Stream added, broadcasting: 1\nI0501 16:24:24.605827    3067 log.go:172] (0xc0009880b0) Reply frame received for 1\nI0501 16:24:24.605893    3067 log.go:172] (0xc0009880b0) (0xc0004e0000) Create stream\nI0501 16:24:24.605912    3067 log.go:172] (0xc0009880b0) (0xc0004e0000) Stream added, broadcasting: 3\nI0501 16:24:24.606872    3067 log.go:172] (0xc0009880b0) Reply frame received for 3\nI0501 16:24:24.606939    3067 log.go:172] (0xc0009880b0) (0xc0004e4000) Create stream\nI0501 16:24:24.606967    3067 log.go:172] (0xc0009880b0) (0xc0004e4000) Stream added, broadcasting: 5\nI0501 16:24:24.607864    3067 log.go:172] (0xc0009880b0) Reply frame received for 5\nI0501 16:24:24.673526    3067 log.go:172] (0xc0009880b0) Data frame received for 5\nI0501 16:24:24.673560    3067 log.go:172] (0xc0004e4000) (5) Data frame handling\nI0501 16:24:24.673573    3067 log.go:172] (0xc0004e4000) (5) Data frame sent\n+ nslookup clusterip-service\nI0501 16:24:24.678040    3067 log.go:172] (0xc0009880b0) Data frame received for 3\nI0501 16:24:24.678054    3067 log.go:172] (0xc0004e0000) (3) Data frame handling\nI0501 16:24:24.678061    3067 log.go:172] (0xc0004e0000) (3) Data frame sent\nI0501 16:24:24.679559    3067 log.go:172] (0xc0009880b0) Data frame received for 3\nI0501 16:24:24.679576    3067 log.go:172] (0xc0004e0000) (3) Data frame handling\nI0501 16:24:24.679587    3067 log.go:172] (0xc0004e0000) (3) Data frame sent\nI0501 16:24:24.680454    3067 log.go:172] (0xc0009880b0) Data frame received for 5\nI0501 16:24:24.680498    3067 log.go:172] (0xc0004e4000) (5) Data frame handling\nI0501 16:24:24.680532    3067 log.go:172] (0xc0009880b0) Data frame received for 3\nI0501 16:24:24.680549    3067 log.go:172] (0xc0004e0000) (3) Data frame handling\nI0501 16:24:24.682084    3067 log.go:172] 
(0xc0009880b0) Data frame received for 1\nI0501 16:24:24.682095    3067 log.go:172] (0xc0007d3540) (1) Data frame handling\nI0501 16:24:24.682115    3067 log.go:172] (0xc0007d3540) (1) Data frame sent\nI0501 16:24:24.682204    3067 log.go:172] (0xc0009880b0) (0xc0007d3540) Stream removed, broadcasting: 1\nI0501 16:24:24.682250    3067 log.go:172] (0xc0009880b0) Go away received\nI0501 16:24:24.682604    3067 log.go:172] (0xc0009880b0) (0xc0007d3540) Stream removed, broadcasting: 1\nI0501 16:24:24.682622    3067 log.go:172] (0xc0009880b0) (0xc0004e0000) Stream removed, broadcasting: 3\nI0501 16:24:24.682632    3067 log.go:172] (0xc0009880b0) (0xc0004e4000) Stream removed, broadcasting: 5\n"
May  1 16:24:24.687: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9520.svc.cluster.local\tcanonical name = externalsvc.services-9520.svc.cluster.local.\nName:\texternalsvc.services-9520.svc.cluster.local\nAddress: 10.102.128.196\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9520, will wait for the garbage collector to delete the pods
May  1 16:24:24.748: INFO: Deleting ReplicationController externalsvc took: 7.404292ms
May  1 16:24:25.048: INFO: Terminating ReplicationController externalsvc pods took: 300.250287ms
May  1 16:24:33.900: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:24:33.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9520" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:27.264 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":200,"skipped":3418,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:24:33.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-8bff0559-fd00-4120-a248-f7db83fed948
STEP: Creating a pod to test consume configMaps
May  1 16:24:34.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8" in namespace "configmap-8351" to be "Succeeded or Failed"
May  1 16:24:34.074: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689228ms
May  1 16:24:36.095: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025425598s
May  1 16:24:38.150: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080964031s
May  1 16:24:40.264: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194228726s
May  1 16:24:42.359: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289860432s
May  1 16:24:44.375: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305881796s
STEP: Saw pod success
May  1 16:24:44.375: INFO: Pod "pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8" satisfied condition "Succeeded or Failed"
May  1 16:24:44.378: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8 container configmap-volume-test: 
STEP: delete the pod
May  1 16:24:44.420: INFO: Waiting for pod pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8 to disappear
May  1 16:24:44.551: INFO: Pod pod-configmaps-30ee461e-f36e-46f0-baab-407eae855ae8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:24:44.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8351" for this suite.

• [SLOW TEST:10.673 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3419,"failed":0}
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:24:44.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-894661b9-06a2-41f4-8c27-b6ab5f354d3d
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-894661b9-06a2-41f4-8c27-b6ab5f354d3d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:26:08.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2125" for this suite.

• [SLOW TEST:83.441 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3419,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:26:08.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
May  1 16:26:08.278: INFO: Waiting up to 5m0s for pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef" in namespace "var-expansion-1662" to be "Succeeded or Failed"
May  1 16:26:08.312: INFO: Pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef": Phase="Pending", Reason="", readiness=false. Elapsed: 33.604592ms
May  1 16:26:10.315: INFO: Pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036926874s
May  1 16:26:12.522: INFO: Pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243414982s
May  1 16:26:14.641: INFO: Pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.362643379s
STEP: Saw pod success
May  1 16:26:14.641: INFO: Pod "var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef" satisfied condition "Succeeded or Failed"
May  1 16:26:14.644: INFO: Trying to get logs from node kali-worker2 pod var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef container dapi-container: 
STEP: delete the pod
May  1 16:26:14.954: INFO: Waiting for pod var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef to disappear
May  1 16:26:15.350: INFO: Pod var-expansion-55a5abde-440b-4ce6-8efe-2c630e9930ef no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:26:15.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1662" for this suite.

• [SLOW TEST:7.334 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:26:15.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:26:17.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:26:19.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:26:21.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:26:23.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947177, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:26:27.000: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:26:28.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1513" for this suite.
STEP: Destroying namespace "webhook-1513-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.747 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":204,"skipped":3494,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:26:29.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  1 16:26:36.531: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:26:37.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8561" for this suite.

• [SLOW TEST:8.483 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3511,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:26:37.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  1 16:26:40.618: INFO: PodSpec: initContainers in spec.initContainers
May  1 16:27:42.463: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-211c4267-6e84-4ebe-827e-f6d1aa561baf", GenerateName:"", Namespace:"init-container-7968", SelfLink:"/api/v1/namespaces/init-container-7968/pods/pod-init-211c4267-6e84-4ebe-827e-f6d1aa561baf", UID:"605b0549-fa76-4f09-af0e-5d23cc3f7b60", ResourceVersion:"671781", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723947201, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"617972702"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00344c080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00344c0a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00344c0c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00344c0e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8hv9k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006c70000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8hv9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8hv9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8hv9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002e241b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a28000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002e242f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002e24310)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002e24318), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002e2431c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947202, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947202, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947202, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947201, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.15", PodIP:"10.244.2.66", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.66"}}, StartTime:(*v1.Time)(0xc00344c100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00344c140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a28380)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://171b10c26eb4113b22645d24ac938d8c6dd083a8a9bee3191f45cbf724c8f704", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00344c1c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00344c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002e243ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:27:42.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7968" for this suite.

• [SLOW TEST:65.092 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":206,"skipped":3519,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:27:42.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:27:43.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3" in namespace "projected-7996" to be "Succeeded or Failed"
May  1 16:27:43.501: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.467865ms
May  1 16:27:45.506: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036921087s
May  1 16:27:47.702: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233273332s
May  1 16:27:50.589: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.120117978s
May  1 16:27:52.792: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.322963386s
STEP: Saw pod success
May  1 16:27:52.792: INFO: Pod "downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3" satisfied condition "Succeeded or Failed"
May  1 16:27:52.795: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3 container client-container: 
STEP: delete the pod
May  1 16:27:53.608: INFO: Waiting for pod downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3 to disappear
May  1 16:27:54.002: INFO: Pod downwardapi-volume-2de1d6d4-ef50-49d0-b5c2-95143c1259a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:27:54.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7996" for this suite.

• [SLOW TEST:11.931 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:27:54.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6723
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6723
STEP: Creating statefulset with conflicting port in namespace statefulset-6723
STEP: Waiting until pod test-pod will start running in namespace statefulset-6723
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6723
May  1 16:28:03.729: INFO: Observed stateful pod in namespace: statefulset-6723, name: ss-0, uid: 5cc730ff-8706-49f3-9ed9-cc2add9c2001, status phase: Pending. Waiting for statefulset controller to delete.
May  1 16:28:03.998: INFO: Observed stateful pod in namespace: statefulset-6723, name: ss-0, uid: 5cc730ff-8706-49f3-9ed9-cc2add9c2001, status phase: Failed. Waiting for statefulset controller to delete.
May  1 16:28:04.044: INFO: Observed stateful pod in namespace: statefulset-6723, name: ss-0, uid: 5cc730ff-8706-49f3-9ed9-cc2add9c2001, status phase: Failed. Waiting for statefulset controller to delete.
May  1 16:28:04.147: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6723
STEP: Removing pod with conflicting port in namespace statefulset-6723
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6723 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  1 16:28:11.024: INFO: Deleting all statefulset in ns statefulset-6723
May  1 16:28:11.027: INFO: Scaling statefulset ss to 0
May  1 16:28:21.136: INFO: Waiting for statefulset status.replicas updated to 0
May  1 16:28:21.139: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:28:21.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6723" for this suite.

• [SLOW TEST:26.560 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":208,"skipped":3547,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:28:21.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  1 16:28:30.621: INFO: DNS probes using dns-7148/dns-test-ef896ba3-1d91-425d-becb-ff789abf21cb succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:28:30.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7148" for this suite.

• [SLOW TEST:9.520 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":209,"skipped":3557,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:28:30.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e2150402-6aad-466b-842c-563969a8ef8e
STEP: Creating a pod to test consume configMaps
May  1 16:28:31.292: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316" in namespace "projected-8050" to be "Succeeded or Failed"
May  1 16:28:31.308: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316": Phase="Pending", Reason="", readiness=false. Elapsed: 15.818983ms
May  1 16:28:33.391: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09880581s
May  1 16:28:35.541: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248603925s
May  1 16:28:37.769: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316": Phase="Running", Reason="", readiness=true. Elapsed: 6.476505933s
May  1 16:28:39.918: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.625835031s
STEP: Saw pod success
May  1 16:28:39.918: INFO: Pod "pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316" satisfied condition "Succeeded or Failed"
May  1 16:28:39.932: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316 container projected-configmap-volume-test: 
STEP: delete the pod
May  1 16:28:40.428: INFO: Waiting for pod pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316 to disappear
May  1 16:28:40.984: INFO: Pod pod-projected-configmaps-fcaa1ad9-2f5f-49e9-987b-c5a6344fe316 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:28:40.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8050" for this suite.

• [SLOW TEST:10.367 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3574,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:28:41.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:28:42.014: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:28:44.238: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:28:46.018: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:28:48.056: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:28:50.038: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:28:52.018: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:28:54.018: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:28:56.018: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:28:58.017: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:29:00.044: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = false)
May  1 16:29:02.019: INFO: The status of Pod test-webserver-89b818ef-ef05-445e-8525-4f34c71a2df9 is Running (Ready = true)
May  1 16:29:02.022: INFO: Container started at 2020-05-01 16:28:46 +0000 UTC, pod became ready at 2020-05-01 16:29:01 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:02.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5323" for this suite.

• [SLOW TEST:20.946 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3585,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:02.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-30e9f082-d762-434f-9926-2126f0bc4376
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:02.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-510" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":212,"skipped":3590,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:02.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:06.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4016" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3594,"failed":0}
SS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:06.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:29:06.472: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8" in namespace "security-context-test-9099" to be "Succeeded or Failed"
May  1 16:29:06.496: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.848251ms
May  1 16:29:08.703: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230842231s
May  1 16:29:10.850: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377848528s
May  1 16:29:12.856: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38461231s
May  1 16:29:15.273: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800772259s
May  1 16:29:17.421: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.949551613s
May  1 16:29:17.421: INFO: Pod "alpine-nnp-false-3436bb6c-e4ba-44f3-9ca4-6100b31369b8" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:17.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9099" for this suite.

• [SLOW TEST:11.093 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3596,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:17.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  1 16:29:18.433: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:18.493: INFO: Number of nodes with available pods: 0
May  1 16:29:18.493: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:19.498: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:19.503: INFO: Number of nodes with available pods: 0
May  1 16:29:19.503: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:20.499: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:20.503: INFO: Number of nodes with available pods: 0
May  1 16:29:20.503: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:21.497: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:21.501: INFO: Number of nodes with available pods: 0
May  1 16:29:21.501: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:23.258: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:23.500: INFO: Number of nodes with available pods: 0
May  1 16:29:23.500: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:24.499: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:24.556: INFO: Number of nodes with available pods: 0
May  1 16:29:24.556: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:26.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:26.400: INFO: Number of nodes with available pods: 1
May  1 16:29:26.400: INFO: Node kali-worker2 is running more than one daemon pod
May  1 16:29:26.639: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:26.670: INFO: Number of nodes with available pods: 2
May  1 16:29:26.670: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May  1 16:29:26.730: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:26.733: INFO: Number of nodes with available pods: 1
May  1 16:29:26.733: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:27.788: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:27.791: INFO: Number of nodes with available pods: 1
May  1 16:29:27.791: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:28.739: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:28.743: INFO: Number of nodes with available pods: 1
May  1 16:29:28.743: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:29.738: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:29.742: INFO: Number of nodes with available pods: 1
May  1 16:29:29.742: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:30.776: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:30.779: INFO: Number of nodes with available pods: 1
May  1 16:29:30.779: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:31.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:31.770: INFO: Number of nodes with available pods: 1
May  1 16:29:31.770: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:32.737: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:32.741: INFO: Number of nodes with available pods: 1
May  1 16:29:32.741: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:34.022: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:34.026: INFO: Number of nodes with available pods: 1
May  1 16:29:34.026: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:34.738: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:34.742: INFO: Number of nodes with available pods: 1
May  1 16:29:34.742: INFO: Node kali-worker is running more than one daemon pod
May  1 16:29:35.739: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:29:35.742: INFO: Number of nodes with available pods: 2
May  1 16:29:35.742: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7291, will wait for the garbage collector to delete the pods
May  1 16:29:35.803: INFO: Deleting DaemonSet.extensions daemon-set took: 5.934667ms
May  1 16:29:36.803: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000237169s
May  1 16:29:43.807: INFO: Number of nodes with available pods: 0
May  1 16:29:43.807: INFO: Number of running nodes: 0, number of available pods: 0
May  1 16:29:43.810: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7291/daemonsets","resourceVersion":"672476"},"items":null}

May  1 16:29:43.812: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7291/pods","resourceVersion":"672476"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:43.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7291" for this suite.

• [SLOW TEST:26.382 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":215,"skipped":3615,"failed":0}
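The polling above repeats because the DaemonSet's pods intentionally carry no toleration for the control-plane taint, so the framework skips that node when counting available pods. For reference, a minimal sketch of a DaemonSet that would instead schedule onto the tainted node (the manifest is illustrative, not the one the test creates; only the `daemon-set` name and the taint key come from the log):

```yaml
# Hypothetical sketch: this toleration would let the daemon pods land on the
# NoSchedule-tainted control-plane node as well, instead of being skipped.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set              # name taken from the test log above
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # taint key from the log
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2           # placeholder image
```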
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:43.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May  1 16:29:51.134: INFO: 0 pods remaining
May  1 16:29:51.134: INFO: 0 pods has nil DeletionTimestamp
May  1 16:29:51.134: INFO: 
STEP: Gathering metrics
W0501 16:29:52.770619       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  1 16:29:52.770: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:29:52.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-227" for this suite.

• [SLOW TEST:9.186 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":216,"skipped":3617,"failed":0}
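The "keep the rc around until all its pods are deleted" behavior corresponds to foreground cascading deletion: the owner gets a deletionTimestamp but remains visible until the garbage collector has removed its dependents. A sketch of the DeleteOptions body that requests it (assuming the standard meta/v1 DeleteOptions, which the log itself does not show):

```yaml
# Foreground propagation: the rc stays (with deletionTimestamp set and the
# foregroundDeletion finalizer) until the GC has deleted all its pods.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```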
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:29:53.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-6997e202-9b46-44d1-ba15-8de796e230de
STEP: Creating a pod to test consume secrets
May  1 16:29:55.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b" in namespace "projected-4852" to be "Succeeded or Failed"
May  1 16:29:55.843: INFO: Pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b": Phase="Pending", Reason="", readiness=false. Elapsed: 372.803084ms
May  1 16:29:58.163: INFO: Pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.691960862s
May  1 16:30:00.303: INFO: Pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.83214394s
May  1 16:30:02.359: INFO: Pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.888822095s
STEP: Saw pod success
May  1 16:30:02.359: INFO: Pod "pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b" satisfied condition "Succeeded or Failed"
May  1 16:30:02.445: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b container projected-secret-volume-test: 
STEP: delete the pod
May  1 16:30:03.339: INFO: Waiting for pod pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b to disappear
May  1 16:30:03.391: INFO: Pod pod-projected-secrets-2d89f24f-f490-48ca-85f0-4246223b207b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:30:03.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4852" for this suite.

• [SLOW TEST:10.400 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3646,"failed":0}
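The pattern exercised above — a projected volume sourcing a secret, with `defaultMode` controlling the mode bits of the projected files — looks roughly like this pod spec (names and image are assumptions for illustration; only the container name and the projected-secret idea come from the log):

```yaml
# Illustrative sketch, not the exact pod the test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example     # hypothetical name
spec:
  containers:
  - name: projected-secret-volume-test    # container name from the log
    image: k8s.gcr.io/pause:3.2           # placeholder image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                   # LinuxOnly: mode applied to projected files
      sources:
      - secret:
          name: projected-secret-test     # secret name is illustrative
  restartPolicy: Never
```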
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:30:03.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May  1 16:30:03.508: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-965 /api/v1/namespaces/watch-965/configmaps/e2e-watch-test-watch-closed 524eb858-f8e8-401c-b20c-efa033b1d188 672704 0 2020-05-01 16:30:03 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-01 16:30:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  1 16:30:03.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-965 /api/v1/namespaces/watch-965/configmaps/e2e-watch-test-watch-closed 524eb858-f8e8-401c-b20c-efa033b1d188 672706 0 2020-05-01 16:30:03 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-01 16:30:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May  1 16:30:03.573: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-965 /api/v1/namespaces/watch-965/configmaps/e2e-watch-test-watch-closed 524eb858-f8e8-401c-b20c-efa033b1d188 672708 0 2020-05-01 16:30:03 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-01 16:30:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  1 16:30:03.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-965 /api/v1/namespaces/watch-965/configmaps/e2e-watch-test-watch-closed 524eb858-f8e8-401c-b20c-efa033b1d188 672710 0 2020-05-01 16:30:03 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-01 16:30:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:30:03.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-965" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":218,"skipped":3665,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:30:03.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:30:04.646: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:30:06.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947404, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:30:08.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947405, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947404, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:30:11.807: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:30:13.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6925" for this suite.
STEP: Destroying namespace "webhook-6925-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.362 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":219,"skipped":3670,"failed":0}
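The webhooks listed and collection-deleted above are MutatingWebhookConfiguration objects backed by the `e2e-test-webhook` service deployed earlier in the test. A sketch of such a configuration (the webhook name, path, and rules are assumptions; the service name and namespace are taken from the log):

```yaml
# Hypothetical sketch of a mutating webhook like those listed/deleted above.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook         # hypothetical name
webhooks:
- name: mutate-configmaps.example.com     # assumed webhook name
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]             # the test mutates configMaps
  clientConfig:
    service:
      namespace: webhook-6925             # namespace from the log
      name: e2e-test-webhook              # service name from the log
      path: /mutating-configmaps          # assumed path
    caBundle: Cg==                        # placeholder CA bundle
```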
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:30:13.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:30:14.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May  1 16:30:17.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 create -f -'
May  1 16:30:24.647: INFO: stderr: ""
May  1 16:30:24.647: INFO: stdout: "e2e-test-crd-publish-openapi-430-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  1 16:30:24.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 delete e2e-test-crd-publish-openapi-430-crds test-foo'
May  1 16:30:25.152: INFO: stderr: ""
May  1 16:30:25.152: INFO: stdout: "e2e-test-crd-publish-openapi-430-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May  1 16:30:25.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 apply -f -'
May  1 16:30:25.691: INFO: stderr: ""
May  1 16:30:25.691: INFO: stdout: "e2e-test-crd-publish-openapi-430-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  1 16:30:25.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 delete e2e-test-crd-publish-openapi-430-crds test-foo'
May  1 16:30:25.850: INFO: stderr: ""
May  1 16:30:25.850: INFO: stdout: "e2e-test-crd-publish-openapi-430-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May  1 16:30:25.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 create -f -'
May  1 16:30:26.111: INFO: rc: 1
May  1 16:30:26.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 apply -f -'
May  1 16:30:26.422: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May  1 16:30:26.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 create -f -'
May  1 16:30:26.669: INFO: rc: 1
May  1 16:30:26.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1456 apply -f -'
May  1 16:30:26.934: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May  1 16:30:26.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-430-crds'
May  1 16:30:27.618: INFO: stderr: ""
May  1 16:30:27.618: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-430-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May  1 16:30:27.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-430-crds.metadata'
May  1 16:30:28.190: INFO: stderr: ""
May  1 16:30:28.190: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-430-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May  1 16:30:28.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-430-crds.spec'
May  1 16:30:28.565: INFO: stderr: ""
May  1 16:30:28.565: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-430-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May  1 16:30:28.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-430-crds.spec.bars'
May  1 16:30:29.005: INFO: stderr: ""
May  1 16:30:29.005: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-430-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May  1 16:30:29.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-430-crds.spec.bars2'
May  1 16:30:29.528: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:30:32.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1456" for this suite.

• [SLOW TEST:18.522 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":220,"skipped":3679,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:30:32.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May  1 16:30:32.952: INFO: Waiting up to 5m0s for pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921" in namespace "containers-7929" to be "Succeeded or Failed"
May  1 16:30:32.972: INFO: Pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921": Phase="Pending", Reason="", readiness=false. Elapsed: 19.802229ms
May  1 16:30:35.171: INFO: Pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219391041s
May  1 16:30:37.320: INFO: Pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368283553s
May  1 16:30:39.337: INFO: Pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.384698853s
STEP: Saw pod success
May  1 16:30:39.337: INFO: Pod "client-containers-ced7b162-9458-4ad8-95de-08b771f57921" satisfied condition "Succeeded or Failed"
May  1 16:30:39.339: INFO: Trying to get logs from node kali-worker pod client-containers-ced7b162-9458-4ad8-95de-08b771f57921 container test-container: 
STEP: delete the pod
May  1 16:30:39.423: INFO: Waiting for pod client-containers-ced7b162-9458-4ad8-95de-08b771f57921 to disappear
May  1 16:30:39.444: INFO: Pod client-containers-ced7b162-9458-4ad8-95de-08b771f57921 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:30:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7929" for this suite.

• [SLOW TEST:6.982 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3680,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:30:39.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:30:39.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3707
I0501 16:30:39.590078       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3707, replica count: 1
I0501 16:30:40.640365       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:30:41.640551       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:30:42.640796       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:30:43.641375       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  1 16:30:43.783: INFO: Created: latency-svc-wtspk
May  1 16:30:43.814: INFO: Got endpoints: latency-svc-wtspk [71.885962ms]
May  1 16:30:43.872: INFO: Created: latency-svc-fb7p9
May  1 16:30:43.901: INFO: Got endpoints: latency-svc-fb7p9 [87.119825ms]
May  1 16:30:43.937: INFO: Created: latency-svc-n5dwl
May  1 16:30:43.955: INFO: Got endpoints: latency-svc-n5dwl [140.852672ms]
May  1 16:30:43.997: INFO: Created: latency-svc-bjt7k
May  1 16:30:44.000: INFO: Got endpoints: latency-svc-bjt7k [185.830914ms]
May  1 16:30:44.027: INFO: Created: latency-svc-h8ktb
May  1 16:30:44.039: INFO: Got endpoints: latency-svc-h8ktb [224.560219ms]
May  1 16:30:44.059: INFO: Created: latency-svc-4vcg6
May  1 16:30:44.075: INFO: Got endpoints: latency-svc-4vcg6 [260.852052ms]
May  1 16:30:44.165: INFO: Created: latency-svc-zsgbs
May  1 16:30:44.183: INFO: Got endpoints: latency-svc-zsgbs [368.61255ms]
May  1 16:30:44.225: INFO: Created: latency-svc-v9xl9
May  1 16:30:44.238: INFO: Got endpoints: latency-svc-v9xl9 [423.883068ms]
May  1 16:30:44.261: INFO: Created: latency-svc-v75j2
May  1 16:30:44.309: INFO: Got endpoints: latency-svc-v75j2 [494.76802ms]
May  1 16:30:44.330: INFO: Created: latency-svc-llrkw
May  1 16:30:44.361: INFO: Got endpoints: latency-svc-llrkw [547.326047ms]
May  1 16:30:44.388: INFO: Created: latency-svc-w8sjt
May  1 16:30:44.406: INFO: Got endpoints: latency-svc-w8sjt [592.470371ms]
May  1 16:30:44.476: INFO: Created: latency-svc-jvnwc
May  1 16:30:44.502: INFO: Got endpoints: latency-svc-jvnwc [687.832045ms]
May  1 16:30:44.626: INFO: Created: latency-svc-5wpfs
May  1 16:30:44.629: INFO: Got endpoints: latency-svc-5wpfs [815.318204ms]
May  1 16:30:44.688: INFO: Created: latency-svc-snk6h
May  1 16:30:44.700: INFO: Got endpoints: latency-svc-snk6h [886.343847ms]
May  1 16:30:44.769: INFO: Created: latency-svc-zxzs9
May  1 16:30:44.782: INFO: Got endpoints: latency-svc-zxzs9 [968.453154ms]
May  1 16:30:44.814: INFO: Created: latency-svc-hxv6f
May  1 16:30:44.827: INFO: Got endpoints: latency-svc-hxv6f [1.01263594s]
May  1 16:30:44.906: INFO: Created: latency-svc-kv8pj
May  1 16:30:44.909: INFO: Got endpoints: latency-svc-kv8pj [1.00835261s]
May  1 16:30:44.941: INFO: Created: latency-svc-v4dzc
May  1 16:30:44.954: INFO: Got endpoints: latency-svc-v4dzc [998.541821ms]
May  1 16:30:44.982: INFO: Created: latency-svc-n5dn8
May  1 16:30:45.044: INFO: Got endpoints: latency-svc-n5dn8 [1.044670501s]
May  1 16:30:45.064: INFO: Created: latency-svc-hxnd7
May  1 16:30:45.086: INFO: Got endpoints: latency-svc-hxnd7 [1.04746629s]
May  1 16:30:45.188: INFO: Created: latency-svc-7hjlt
May  1 16:30:45.192: INFO: Got endpoints: latency-svc-7hjlt [1.117675721s]
May  1 16:30:45.228: INFO: Created: latency-svc-mldhw
May  1 16:30:45.242: INFO: Got endpoints: latency-svc-mldhw [1.059552841s]
May  1 16:30:45.263: INFO: Created: latency-svc-9fb9s
May  1 16:30:45.272: INFO: Got endpoints: latency-svc-9fb9s [1.03466853s]
May  1 16:30:45.332: INFO: Created: latency-svc-pz8ft
May  1 16:30:45.337: INFO: Got endpoints: latency-svc-pz8ft [1.028551206s]
May  1 16:30:45.364: INFO: Created: latency-svc-2qht7
May  1 16:30:45.375: INFO: Got endpoints: latency-svc-2qht7 [1.014450531s]
May  1 16:30:45.396: INFO: Created: latency-svc-jtppb
May  1 16:30:45.423: INFO: Got endpoints: latency-svc-jtppb [1.017080674s]
May  1 16:30:45.494: INFO: Created: latency-svc-fxgzx
May  1 16:30:45.519: INFO: Got endpoints: latency-svc-fxgzx [1.017840458s]
May  1 16:30:45.539: INFO: Created: latency-svc-nff4v
May  1 16:30:45.582: INFO: Got endpoints: latency-svc-nff4v [952.614632ms]
May  1 16:30:45.630: INFO: Created: latency-svc-cs469
May  1 16:30:45.656: INFO: Got endpoints: latency-svc-cs469 [955.520554ms]
May  1 16:30:45.726: INFO: Created: latency-svc-457ft
May  1 16:30:46.147: INFO: Got endpoints: latency-svc-457ft [1.364563437s]
May  1 16:30:46.151: INFO: Created: latency-svc-tpwd5
May  1 16:30:46.179: INFO: Got endpoints: latency-svc-tpwd5 [1.352871308s]
May  1 16:30:46.222: INFO: Created: latency-svc-cbqbp
May  1 16:30:46.327: INFO: Got endpoints: latency-svc-cbqbp [1.417494449s]
May  1 16:30:46.482: INFO: Created: latency-svc-7mbc9
May  1 16:30:46.776: INFO: Got endpoints: latency-svc-7mbc9 [1.822095191s]
May  1 16:30:46.925: INFO: Created: latency-svc-5d9tp
May  1 16:30:46.936: INFO: Got endpoints: latency-svc-5d9tp [1.891076136s]
May  1 16:30:47.004: INFO: Created: latency-svc-ghxv7
May  1 16:30:47.059: INFO: Got endpoints: latency-svc-ghxv7 [1.972671443s]
May  1 16:30:47.082: INFO: Created: latency-svc-mjc65
May  1 16:30:47.092: INFO: Got endpoints: latency-svc-mjc65 [1.899940856s]
May  1 16:30:47.149: INFO: Created: latency-svc-km229
May  1 16:30:47.219: INFO: Got endpoints: latency-svc-km229 [1.976441635s]
May  1 16:30:47.225: INFO: Created: latency-svc-bwb96
May  1 16:30:47.272: INFO: Got endpoints: latency-svc-bwb96 [1.999720555s]
May  1 16:30:47.295: INFO: Created: latency-svc-qqdd6
May  1 16:30:47.318: INFO: Got endpoints: latency-svc-qqdd6 [1.980072194s]
May  1 16:30:47.388: INFO: Created: latency-svc-468qz
May  1 16:30:47.399: INFO: Got endpoints: latency-svc-468qz [2.023790324s]
May  1 16:30:47.461: INFO: Created: latency-svc-pzbgs
May  1 16:30:47.542: INFO: Got endpoints: latency-svc-pzbgs [2.118907161s]
May  1 16:30:47.575: INFO: Created: latency-svc-2kf8d
May  1 16:30:47.603: INFO: Got endpoints: latency-svc-2kf8d [2.083691749s]
May  1 16:30:47.691: INFO: Created: latency-svc-z6zgw
May  1 16:30:47.730: INFO: Got endpoints: latency-svc-z6zgw [2.147921989s]
May  1 16:30:47.730: INFO: Created: latency-svc-6v8br
May  1 16:30:47.785: INFO: Got endpoints: latency-svc-6v8br [2.129544406s]
May  1 16:30:47.856: INFO: Created: latency-svc-d4pzq
May  1 16:30:47.892: INFO: Got endpoints: latency-svc-d4pzq [1.744955779s]
May  1 16:30:47.928: INFO: Created: latency-svc-qppqs
May  1 16:30:47.946: INFO: Got endpoints: latency-svc-qppqs [1.766088942s]
May  1 16:30:47.991: INFO: Created: latency-svc-qdrrk
May  1 16:30:47.994: INFO: Got endpoints: latency-svc-qdrrk [1.667282397s]
May  1 16:30:48.038: INFO: Created: latency-svc-rvb49
May  1 16:30:48.048: INFO: Got endpoints: latency-svc-rvb49 [1.272058137s]
May  1 16:30:48.066: INFO: Created: latency-svc-lbqhf
May  1 16:30:48.140: INFO: Got endpoints: latency-svc-lbqhf [1.204816786s]
May  1 16:30:48.176: INFO: Created: latency-svc-4k4cg
May  1 16:30:48.235: INFO: Got endpoints: latency-svc-4k4cg [1.17608582s]
May  1 16:30:48.446: INFO: Created: latency-svc-tcbtm
May  1 16:30:48.499: INFO: Got endpoints: latency-svc-tcbtm [1.406579327s]
May  1 16:30:48.661: INFO: Created: latency-svc-jx42z
May  1 16:30:49.196: INFO: Got endpoints: latency-svc-jx42z [1.977145189s]
May  1 16:30:49.203: INFO: Created: latency-svc-vs4pt
May  1 16:30:49.203: INFO: Got endpoints: latency-svc-vs4pt [1.931173882s]
May  1 16:30:49.472: INFO: Created: latency-svc-vcvf9
May  1 16:30:49.492: INFO: Got endpoints: latency-svc-vcvf9 [2.17483032s]
May  1 16:30:49.546: INFO: Created: latency-svc-gsthk
May  1 16:30:50.093: INFO: Got endpoints: latency-svc-gsthk [2.693509852s]
May  1 16:30:50.399: INFO: Created: latency-svc-g6vrg
May  1 16:30:50.866: INFO: Got endpoints: latency-svc-g6vrg [3.323995175s]
May  1 16:30:51.374: INFO: Created: latency-svc-psfrw
May  1 16:30:51.417: INFO: Got endpoints: latency-svc-psfrw [3.813584362s]
May  1 16:30:52.207: INFO: Created: latency-svc-sxqvh
May  1 16:30:52.238: INFO: Got endpoints: latency-svc-sxqvh [4.507965358s]
May  1 16:30:52.903: INFO: Created: latency-svc-sgt4x
May  1 16:30:52.920: INFO: Got endpoints: latency-svc-sgt4x [5.13475646s]
May  1 16:30:53.168: INFO: Created: latency-svc-x2dp6
May  1 16:30:53.459: INFO: Got endpoints: latency-svc-x2dp6 [5.566998728s]
May  1 16:30:53.462: INFO: Created: latency-svc-gmwts
May  1 16:30:53.520: INFO: Got endpoints: latency-svc-gmwts [5.574621689s]
May  1 16:30:53.644: INFO: Created: latency-svc-s79kf
May  1 16:30:53.648: INFO: Got endpoints: latency-svc-s79kf [5.653994085s]
May  1 16:30:53.725: INFO: Created: latency-svc-cmrmd
May  1 16:30:53.742: INFO: Got endpoints: latency-svc-cmrmd [5.694016053s]
May  1 16:30:53.847: INFO: Created: latency-svc-fz2j4
May  1 16:30:53.874: INFO: Got endpoints: latency-svc-fz2j4 [5.733910699s]
May  1 16:30:53.942: INFO: Created: latency-svc-tc4nz
May  1 16:30:54.345: INFO: Got endpoints: latency-svc-tc4nz [6.110031513s]
May  1 16:30:54.349: INFO: Created: latency-svc-sw8dt
May  1 16:30:54.360: INFO: Got endpoints: latency-svc-sw8dt [5.861290274s]
May  1 16:30:54.410: INFO: Created: latency-svc-st8v9
May  1 16:30:54.420: INFO: Got endpoints: latency-svc-st8v9 [5.223954352s]
May  1 16:30:54.507: INFO: Created: latency-svc-sg8jt
May  1 16:30:54.531: INFO: Got endpoints: latency-svc-sg8jt [5.327381436s]
May  1 16:30:54.585: INFO: Created: latency-svc-ztjd7
May  1 16:30:54.674: INFO: Got endpoints: latency-svc-ztjd7 [5.181099984s]
May  1 16:30:54.735: INFO: Created: latency-svc-j8gmw
May  1 16:30:54.769: INFO: Got endpoints: latency-svc-j8gmw [4.675579945s]
May  1 16:30:55.162: INFO: Created: latency-svc-tpj7t
May  1 16:30:55.328: INFO: Got endpoints: latency-svc-tpj7t [4.460964508s]
May  1 16:30:55.379: INFO: Created: latency-svc-p7pzh
May  1 16:30:55.687: INFO: Got endpoints: latency-svc-p7pzh [4.270372455s]
May  1 16:30:55.768: INFO: Created: latency-svc-gb5qm
May  1 16:30:56.046: INFO: Got endpoints: latency-svc-gb5qm [3.807607728s]
May  1 16:30:56.105: INFO: Created: latency-svc-n5vrx
May  1 16:30:56.123: INFO: Got endpoints: latency-svc-n5vrx [3.202378185s]
May  1 16:30:56.193: INFO: Created: latency-svc-9k42z
May  1 16:30:56.201: INFO: Got endpoints: latency-svc-9k42z [2.742074299s]
May  1 16:30:56.241: INFO: Created: latency-svc-z86vf
May  1 16:30:56.332: INFO: Got endpoints: latency-svc-z86vf [2.812154933s]
May  1 16:30:56.530: INFO: Created: latency-svc-b4rmp
May  1 16:30:56.566: INFO: Got endpoints: latency-svc-b4rmp [2.918007168s]
May  1 16:30:57.237: INFO: Created: latency-svc-6rbn7
May  1 16:30:57.295: INFO: Got endpoints: latency-svc-6rbn7 [3.552588593s]
May  1 16:30:57.405: INFO: Created: latency-svc-c4sjj
May  1 16:30:57.431: INFO: Got endpoints: latency-svc-c4sjj [3.556418535s]
May  1 16:30:57.498: INFO: Created: latency-svc-845z9
May  1 16:30:57.553: INFO: Got endpoints: latency-svc-845z9 [3.207839706s]
May  1 16:30:57.597: INFO: Created: latency-svc-94fzh
May  1 16:30:57.618: INFO: Got endpoints: latency-svc-94fzh [3.257141801s]
May  1 16:30:57.955: INFO: Created: latency-svc-llq8d
May  1 16:30:58.398: INFO: Got endpoints: latency-svc-llq8d [3.97775454s]
May  1 16:30:58.626: INFO: Created: latency-svc-hnzth
May  1 16:30:58.974: INFO: Got endpoints: latency-svc-hnzth [4.443378867s]
May  1 16:30:59.167: INFO: Created: latency-svc-zmvnp
May  1 16:30:59.188: INFO: Got endpoints: latency-svc-zmvnp [4.514596689s]
May  1 16:30:59.504: INFO: Created: latency-svc-hhpmz
May  1 16:30:59.777: INFO: Got endpoints: latency-svc-hhpmz [5.008038228s]
May  1 16:30:59.996: INFO: Created: latency-svc-2cvr5
May  1 16:31:00.042: INFO: Got endpoints: latency-svc-2cvr5 [4.714684484s]
May  1 16:31:00.190: INFO: Created: latency-svc-t6pnq
May  1 16:31:00.192: INFO: Got endpoints: latency-svc-t6pnq [4.504900161s]
May  1 16:31:00.365: INFO: Created: latency-svc-fzj8p
May  1 16:31:00.375: INFO: Got endpoints: latency-svc-fzj8p [4.329603533s]
May  1 16:31:00.652: INFO: Created: latency-svc-rf28n
May  1 16:31:00.748: INFO: Got endpoints: latency-svc-rf28n [4.624958543s]
May  1 16:31:00.824: INFO: Created: latency-svc-xscgw
May  1 16:31:00.852: INFO: Got endpoints: latency-svc-xscgw [4.650248646s]
May  1 16:31:00.884: INFO: Created: latency-svc-qzkvh
May  1 16:31:00.946: INFO: Got endpoints: latency-svc-qzkvh [4.613043825s]
May  1 16:31:00.975: INFO: Created: latency-svc-8lf5v
May  1 16:31:01.005: INFO: Got endpoints: latency-svc-8lf5v [4.438661535s]
May  1 16:31:01.076: INFO: Created: latency-svc-5fmnd
May  1 16:31:01.333: INFO: Got endpoints: latency-svc-5fmnd [4.038331686s]
May  1 16:31:01.555: INFO: Created: latency-svc-8xgk8
May  1 16:31:01.652: INFO: Created: latency-svc-mhwqm
May  1 16:31:01.653: INFO: Got endpoints: latency-svc-8xgk8 [4.22162198s]
May  1 16:31:01.775: INFO: Got endpoints: latency-svc-mhwqm [4.222136428s]
May  1 16:31:02.243: INFO: Created: latency-svc-47vfb
May  1 16:31:02.524: INFO: Got endpoints: latency-svc-47vfb [4.906292499s]
May  1 16:31:02.525: INFO: Created: latency-svc-7wthv
May  1 16:31:02.576: INFO: Got endpoints: latency-svc-7wthv [4.178433746s]
May  1 16:31:02.667: INFO: Created: latency-svc-kfc7s
May  1 16:31:02.687: INFO: Got endpoints: latency-svc-kfc7s [3.712577356s]
May  1 16:31:02.715: INFO: Created: latency-svc-w8srp
May  1 16:31:02.731: INFO: Got endpoints: latency-svc-w8srp [3.542409812s]
May  1 16:31:02.854: INFO: Created: latency-svc-5m5w4
May  1 16:31:02.863: INFO: Got endpoints: latency-svc-5m5w4 [3.086236646s]
May  1 16:31:02.889: INFO: Created: latency-svc-gz7rl
May  1 16:31:02.905: INFO: Got endpoints: latency-svc-gz7rl [2.862870902s]
May  1 16:31:02.931: INFO: Created: latency-svc-gdsdd
May  1 16:31:02.997: INFO: Got endpoints: latency-svc-gdsdd [2.805250482s]
May  1 16:31:03.022: INFO: Created: latency-svc-b58z4
May  1 16:31:03.038: INFO: Got endpoints: latency-svc-b58z4 [2.663147053s]
May  1 16:31:03.065: INFO: Created: latency-svc-qk2kf
May  1 16:31:03.074: INFO: Got endpoints: latency-svc-qk2kf [2.326332576s]
May  1 16:31:03.135: INFO: Created: latency-svc-5266x
May  1 16:31:03.139: INFO: Got endpoints: latency-svc-5266x [2.287625217s]
May  1 16:31:03.171: INFO: Created: latency-svc-7rzct
May  1 16:31:03.190: INFO: Got endpoints: latency-svc-7rzct [2.243958957s]
May  1 16:31:03.213: INFO: Created: latency-svc-qpqt4
May  1 16:31:03.232: INFO: Got endpoints: latency-svc-qpqt4 [2.22671725s]
May  1 16:31:03.268: INFO: Created: latency-svc-wg22t
May  1 16:31:03.286: INFO: Got endpoints: latency-svc-wg22t [1.953299438s]
May  1 16:31:03.311: INFO: Created: latency-svc-w9nrw
May  1 16:31:03.335: INFO: Got endpoints: latency-svc-w9nrw [1.681948802s]
May  1 16:31:03.411: INFO: Created: latency-svc-574zw
May  1 16:31:03.419: INFO: Got endpoints: latency-svc-574zw [1.643583127s]
May  1 16:31:03.441: INFO: Created: latency-svc-4krjg
May  1 16:31:03.453: INFO: Got endpoints: latency-svc-4krjg [928.761521ms]
May  1 16:31:03.640: INFO: Created: latency-svc-kp9qq
May  1 16:31:03.675: INFO: Got endpoints: latency-svc-kp9qq [1.098745873s]
May  1 16:31:03.705: INFO: Created: latency-svc-lj575
May  1 16:31:03.782: INFO: Got endpoints: latency-svc-lj575 [1.094542872s]
May  1 16:31:03.802: INFO: Created: latency-svc-mgbpc
May  1 16:31:03.807: INFO: Got endpoints: latency-svc-mgbpc [1.076568413s]
May  1 16:31:03.839: INFO: Created: latency-svc-vph68
May  1 16:31:03.856: INFO: Got endpoints: latency-svc-vph68 [992.798881ms]
May  1 16:31:03.943: INFO: Created: latency-svc-r5rsl
May  1 16:31:03.959: INFO: Got endpoints: latency-svc-r5rsl [1.053943498s]
May  1 16:31:04.012: INFO: Created: latency-svc-jqbbg
May  1 16:31:04.190: INFO: Got endpoints: latency-svc-jqbbg [1.192534081s]
May  1 16:31:04.374: INFO: Created: latency-svc-vfb7p
May  1 16:31:04.428: INFO: Got endpoints: latency-svc-vfb7p [1.389073647s]
May  1 16:31:04.830: INFO: Created: latency-svc-btgxd
May  1 16:31:04.884: INFO: Got endpoints: latency-svc-btgxd [1.809530115s]
May  1 16:31:04.884: INFO: Created: latency-svc-v88xq
May  1 16:31:05.036: INFO: Got endpoints: latency-svc-v88xq [1.896994681s]
May  1 16:31:05.172: INFO: Created: latency-svc-5wfkd
May  1 16:31:05.201: INFO: Got endpoints: latency-svc-5wfkd [2.011680475s]
May  1 16:31:05.291: INFO: Created: latency-svc-fxntk
May  1 16:31:05.297: INFO: Got endpoints: latency-svc-fxntk [2.064666279s]
May  1 16:31:05.332: INFO: Created: latency-svc-h4c8f
May  1 16:31:05.366: INFO: Got endpoints: latency-svc-h4c8f [2.079295572s]
May  1 16:31:05.390: INFO: Created: latency-svc-qqmt7
May  1 16:31:05.441: INFO: Got endpoints: latency-svc-qqmt7 [2.106408586s]
May  1 16:31:05.478: INFO: Created: latency-svc-zwwb2
May  1 16:31:05.788: INFO: Got endpoints: latency-svc-zwwb2 [2.369089634s]
May  1 16:31:05.944: INFO: Created: latency-svc-z2b2q
May  1 16:31:05.948: INFO: Got endpoints: latency-svc-z2b2q [2.494890129s]
May  1 16:31:05.988: INFO: Created: latency-svc-9f2z2
May  1 16:31:06.035: INFO: Got endpoints: latency-svc-9f2z2 [2.359860218s]
May  1 16:31:06.102: INFO: Created: latency-svc-flqxl
May  1 16:31:06.134: INFO: Got endpoints: latency-svc-flqxl [2.352565291s]
May  1 16:31:06.168: INFO: Created: latency-svc-c6dh4
May  1 16:31:06.187: INFO: Got endpoints: latency-svc-c6dh4 [2.379153138s]
May  1 16:31:06.287: INFO: Created: latency-svc-pnzg5
May  1 16:31:06.347: INFO: Got endpoints: latency-svc-pnzg5 [2.491469837s]
May  1 16:31:06.405: INFO: Created: latency-svc-qstwn
May  1 16:31:06.409: INFO: Got endpoints: latency-svc-qstwn [2.449780344s]
May  1 16:31:06.681: INFO: Created: latency-svc-xmxkw
May  1 16:31:06.699: INFO: Got endpoints: latency-svc-xmxkw [2.509078162s]
May  1 16:31:06.998: INFO: Created: latency-svc-njrkw
May  1 16:31:07.024: INFO: Got endpoints: latency-svc-njrkw [2.596004962s]
May  1 16:31:07.097: INFO: Created: latency-svc-p2xcv
May  1 16:31:07.135: INFO: Got endpoints: latency-svc-p2xcv [2.251161057s]
May  1 16:31:07.164: INFO: Created: latency-svc-dcbdd
May  1 16:31:07.173: INFO: Got endpoints: latency-svc-dcbdd [2.136945562s]
May  1 16:31:07.231: INFO: Created: latency-svc-9xcr4
May  1 16:31:07.266: INFO: Got endpoints: latency-svc-9xcr4 [2.065019551s]
May  1 16:31:07.423: INFO: Created: latency-svc-w2vpw
May  1 16:31:07.483: INFO: Got endpoints: latency-svc-w2vpw [2.186158075s]
May  1 16:31:07.561: INFO: Created: latency-svc-b8p6l
May  1 16:31:07.564: INFO: Got endpoints: latency-svc-b8p6l [2.198604761s]
May  1 16:31:07.620: INFO: Created: latency-svc-hwl5n
May  1 16:31:07.692: INFO: Got endpoints: latency-svc-hwl5n [2.250979412s]
May  1 16:31:07.747: INFO: Created: latency-svc-wkcnt
May  1 16:31:07.763: INFO: Got endpoints: latency-svc-wkcnt [1.975288334s]
May  1 16:31:07.866: INFO: Created: latency-svc-z5hqb
May  1 16:31:07.914: INFO: Got endpoints: latency-svc-z5hqb [1.965881374s]
May  1 16:31:08.226: INFO: Created: latency-svc-kpkgm
May  1 16:31:08.246: INFO: Got endpoints: latency-svc-kpkgm [2.210712806s]
May  1 16:31:08.459: INFO: Created: latency-svc-nprt9
May  1 16:31:08.466: INFO: Got endpoints: latency-svc-nprt9 [2.331608313s]
May  1 16:31:08.668: INFO: Created: latency-svc-nbl25
May  1 16:31:08.700: INFO: Got endpoints: latency-svc-nbl25 [2.513319194s]
May  1 16:31:08.968: INFO: Created: latency-svc-g74qj
May  1 16:31:08.973: INFO: Got endpoints: latency-svc-g74qj [2.625405535s]
May  1 16:31:09.285: INFO: Created: latency-svc-26x6w
May  1 16:31:09.301: INFO: Got endpoints: latency-svc-26x6w [2.891925075s]
May  1 16:31:09.488: INFO: Created: latency-svc-wgtq7
May  1 16:31:09.505: INFO: Got endpoints: latency-svc-wgtq7 [2.806249897s]
May  1 16:31:09.708: INFO: Created: latency-svc-k5vj7
May  1 16:31:09.744: INFO: Got endpoints: latency-svc-k5vj7 [2.720453055s]
May  1 16:31:10.345: INFO: Created: latency-svc-x5qjl
May  1 16:31:10.506: INFO: Got endpoints: latency-svc-x5qjl [3.371566053s]
May  1 16:31:10.567: INFO: Created: latency-svc-klp95
May  1 16:31:10.722: INFO: Got endpoints: latency-svc-klp95 [3.548426813s]
May  1 16:31:10.784: INFO: Created: latency-svc-4jgd7
May  1 16:31:10.802: INFO: Got endpoints: latency-svc-4jgd7 [3.535365473s]
May  1 16:31:10.947: INFO: Created: latency-svc-6hrqd
May  1 16:31:11.190: INFO: Got endpoints: latency-svc-6hrqd [3.706766299s]
May  1 16:31:11.712: INFO: Created: latency-svc-blbqw
May  1 16:31:12.142: INFO: Got endpoints: latency-svc-blbqw [4.577472049s]
May  1 16:31:12.399: INFO: Created: latency-svc-4ghd6
May  1 16:31:12.662: INFO: Got endpoints: latency-svc-4ghd6 [4.969859996s]
May  1 16:31:12.704: INFO: Created: latency-svc-zqgbp
May  1 16:31:12.909: INFO: Got endpoints: latency-svc-zqgbp [5.145085119s]
May  1 16:31:12.957: INFO: Created: latency-svc-d6qht
May  1 16:31:13.231: INFO: Got endpoints: latency-svc-d6qht [5.31767004s]
May  1 16:31:13.472: INFO: Created: latency-svc-69nff
May  1 16:31:13.483: INFO: Got endpoints: latency-svc-69nff [5.236644415s]
May  1 16:31:13.562: INFO: Created: latency-svc-bnsjt
May  1 16:31:13.608: INFO: Got endpoints: latency-svc-bnsjt [5.142225462s]
May  1 16:31:13.652: INFO: Created: latency-svc-m7n9p
May  1 16:31:13.680: INFO: Got endpoints: latency-svc-m7n9p [4.980195306s]
May  1 16:31:13.706: INFO: Created: latency-svc-7dw7t
May  1 16:31:13.782: INFO: Got endpoints: latency-svc-7dw7t [4.809201784s]
May  1 16:31:13.820: INFO: Created: latency-svc-7sc6k
May  1 16:31:13.967: INFO: Got endpoints: latency-svc-7sc6k [4.666009966s]
May  1 16:31:13.993: INFO: Created: latency-svc-sxmtg
May  1 16:31:14.048: INFO: Got endpoints: latency-svc-sxmtg [4.542260977s]
May  1 16:31:14.390: INFO: Created: latency-svc-tskx9
May  1 16:31:14.476: INFO: Got endpoints: latency-svc-tskx9 [4.731527325s]
May  1 16:31:14.692: INFO: Created: latency-svc-tdkg8
May  1 16:31:14.738: INFO: Got endpoints: latency-svc-tdkg8 [4.231150418s]
May  1 16:31:14.779: INFO: Created: latency-svc-kwkb6
May  1 16:31:14.925: INFO: Got endpoints: latency-svc-kwkb6 [4.203149608s]
May  1 16:31:14.946: INFO: Created: latency-svc-sbhl9
May  1 16:31:15.128: INFO: Got endpoints: latency-svc-sbhl9 [4.325611282s]
May  1 16:31:15.303: INFO: Created: latency-svc-7zfrm
May  1 16:31:15.319: INFO: Got endpoints: latency-svc-7zfrm [4.129411693s]
May  1 16:31:15.381: INFO: Created: latency-svc-wd2s6
May  1 16:31:15.399: INFO: Got endpoints: latency-svc-wd2s6 [3.257253556s]
May  1 16:31:15.464: INFO: Created: latency-svc-2phsp
May  1 16:31:15.470: INFO: Got endpoints: latency-svc-2phsp [2.807562571s]
May  1 16:31:15.500: INFO: Created: latency-svc-l4php
May  1 16:31:15.536: INFO: Got endpoints: latency-svc-l4php [2.626971929s]
May  1 16:31:15.608: INFO: Created: latency-svc-2hx77
May  1 16:31:15.617: INFO: Got endpoints: latency-svc-2hx77 [2.385542334s]
May  1 16:31:15.643: INFO: Created: latency-svc-l74mm
May  1 16:31:15.666: INFO: Got endpoints: latency-svc-l74mm [2.183012674s]
May  1 16:31:15.764: INFO: Created: latency-svc-5j7bz
May  1 16:31:15.800: INFO: Created: latency-svc-7d8s7
May  1 16:31:15.800: INFO: Got endpoints: latency-svc-5j7bz [2.19163313s]
May  1 16:31:15.843: INFO: Got endpoints: latency-svc-7d8s7 [2.163187411s]
May  1 16:31:15.943: INFO: Created: latency-svc-pm5j2
May  1 16:31:15.980: INFO: Got endpoints: latency-svc-pm5j2 [2.197427573s]
May  1 16:31:16.027: INFO: Created: latency-svc-4gr7j
May  1 16:31:16.095: INFO: Got endpoints: latency-svc-4gr7j [2.127643262s]
May  1 16:31:16.097: INFO: Created: latency-svc-pw6jr
May  1 16:31:16.111: INFO: Got endpoints: latency-svc-pw6jr [2.062990417s]
May  1 16:31:16.155: INFO: Created: latency-svc-trrr8
May  1 16:31:16.178: INFO: Got endpoints: latency-svc-trrr8 [1.702656009s]
May  1 16:31:16.236: INFO: Created: latency-svc-slnkx
May  1 16:31:16.244: INFO: Got endpoints: latency-svc-slnkx [1.506029949s]
May  1 16:31:16.281: INFO: Created: latency-svc-b9bgg
May  1 16:31:16.292: INFO: Got endpoints: latency-svc-b9bgg [1.366549683s]
May  1 16:31:16.329: INFO: Created: latency-svc-lbzbq
May  1 16:31:16.336: INFO: Got endpoints: latency-svc-lbzbq [1.208054873s]
May  1 16:31:16.401: INFO: Created: latency-svc-kx7s7
May  1 16:31:16.437: INFO: Got endpoints: latency-svc-kx7s7 [1.118159464s]
May  1 16:31:16.620: INFO: Created: latency-svc-b2v7r
May  1 16:31:16.670: INFO: Got endpoints: latency-svc-b2v7r [1.270796247s]
May  1 16:31:16.708: INFO: Created: latency-svc-9rtkb
May  1 16:31:16.788: INFO: Got endpoints: latency-svc-9rtkb [1.318097539s]
May  1 16:31:16.789: INFO: Created: latency-svc-ddmm9
May  1 16:31:16.804: INFO: Got endpoints: latency-svc-ddmm9 [1.268305001s]
May  1 16:31:16.980: INFO: Created: latency-svc-jwzw7
May  1 16:31:16.983: INFO: Got endpoints: latency-svc-jwzw7 [1.366119336s]
May  1 16:31:17.398: INFO: Created: latency-svc-4wgvm
May  1 16:31:17.621: INFO: Got endpoints: latency-svc-4wgvm [1.955226439s]
May  1 16:31:17.844: INFO: Created: latency-svc-qc95c
May  1 16:31:17.861: INFO: Got endpoints: latency-svc-qc95c [2.061003822s]
May  1 16:31:18.048: INFO: Created: latency-svc-xbp4c
May  1 16:31:18.075: INFO: Got endpoints: latency-svc-xbp4c [2.231461317s]
May  1 16:31:18.219: INFO: Created: latency-svc-rjjwb
May  1 16:31:18.255: INFO: Got endpoints: latency-svc-rjjwb [2.275453924s]
May  1 16:31:18.297: INFO: Created: latency-svc-vhpps
May  1 16:31:18.307: INFO: Got endpoints: latency-svc-vhpps [2.212024088s]
May  1 16:31:18.380: INFO: Created: latency-svc-m4m5f
May  1 16:31:18.384: INFO: Got endpoints: latency-svc-m4m5f [2.273284778s]
May  1 16:31:18.479: INFO: Created: latency-svc-5j7vs
May  1 16:31:18.530: INFO: Got endpoints: latency-svc-5j7vs [2.35187364s]
May  1 16:31:18.533: INFO: Created: latency-svc-ffxzc
May  1 16:31:18.562: INFO: Got endpoints: latency-svc-ffxzc [2.318135899s]
May  1 16:31:18.616: INFO: Created: latency-svc-kdlp9
May  1 16:31:18.668: INFO: Got endpoints: latency-svc-kdlp9 [2.375738051s]
May  1 16:31:18.690: INFO: Created: latency-svc-8dgnl
May  1 16:31:18.704: INFO: Got endpoints: latency-svc-8dgnl [2.3688315s]
May  1 16:31:18.756: INFO: Created: latency-svc-klgck
May  1 16:31:18.806: INFO: Got endpoints: latency-svc-klgck [2.368278106s]
May  1 16:31:18.837: INFO: Created: latency-svc-54l5p
May  1 16:31:18.874: INFO: Got endpoints: latency-svc-54l5p [2.203473873s]
May  1 16:31:18.959: INFO: Created: latency-svc-95877
May  1 16:31:18.976: INFO: Got endpoints: latency-svc-95877 [2.18768721s]
May  1 16:31:19.081: INFO: Created: latency-svc-p824x
May  1 16:31:19.126: INFO: Got endpoints: latency-svc-p824x [2.322454949s]
May  1 16:31:19.127: INFO: Created: latency-svc-ctn9z
May  1 16:31:19.157: INFO: Got endpoints: latency-svc-ctn9z [2.173673858s]
May  1 16:31:19.157: INFO: Latencies: [87.119825ms 140.852672ms 185.830914ms 224.560219ms 260.852052ms 368.61255ms 423.883068ms 494.76802ms 547.326047ms 592.470371ms 687.832045ms 815.318204ms 886.343847ms 928.761521ms 952.614632ms 955.520554ms 968.453154ms 992.798881ms 998.541821ms 1.00835261s 1.01263594s 1.014450531s 1.017080674s 1.017840458s 1.028551206s 1.03466853s 1.044670501s 1.04746629s 1.053943498s 1.059552841s 1.076568413s 1.094542872s 1.098745873s 1.117675721s 1.118159464s 1.17608582s 1.192534081s 1.204816786s 1.208054873s 1.268305001s 1.270796247s 1.272058137s 1.318097539s 1.352871308s 1.364563437s 1.366119336s 1.366549683s 1.389073647s 1.406579327s 1.417494449s 1.506029949s 1.643583127s 1.667282397s 1.681948802s 1.702656009s 1.744955779s 1.766088942s 1.809530115s 1.822095191s 1.891076136s 1.896994681s 1.899940856s 1.931173882s 1.953299438s 1.955226439s 1.965881374s 1.972671443s 1.975288334s 1.976441635s 1.977145189s 1.980072194s 1.999720555s 2.011680475s 2.023790324s 2.061003822s 2.062990417s 2.064666279s 2.065019551s 2.079295572s 2.083691749s 2.106408586s 2.118907161s 2.127643262s 2.129544406s 2.136945562s 2.147921989s 2.163187411s 2.173673858s 2.17483032s 2.183012674s 2.186158075s 2.18768721s 2.19163313s 2.197427573s 2.198604761s 2.203473873s 2.210712806s 2.212024088s 2.22671725s 2.231461317s 2.243958957s 2.250979412s 2.251161057s 2.273284778s 2.275453924s 2.287625217s 2.318135899s 2.322454949s 2.326332576s 2.331608313s 2.35187364s 2.352565291s 2.359860218s 2.368278106s 2.3688315s 2.369089634s 2.375738051s 2.379153138s 2.385542334s 2.449780344s 2.491469837s 2.494890129s 2.509078162s 2.513319194s 2.596004962s 2.625405535s 2.626971929s 2.663147053s 2.693509852s 2.720453055s 2.742074299s 2.805250482s 2.806249897s 2.807562571s 2.812154933s 2.862870902s 2.891925075s 2.918007168s 3.086236646s 3.202378185s 3.207839706s 3.257141801s 3.257253556s 3.323995175s 3.371566053s 3.535365473s 3.542409812s 3.548426813s 3.552588593s 3.556418535s 3.706766299s 3.712577356s 
3.807607728s 3.813584362s 3.97775454s 4.038331686s 4.129411693s 4.178433746s 4.203149608s 4.22162198s 4.222136428s 4.231150418s 4.270372455s 4.325611282s 4.329603533s 4.438661535s 4.443378867s 4.460964508s 4.504900161s 4.507965358s 4.514596689s 4.542260977s 4.577472049s 4.613043825s 4.624958543s 4.650248646s 4.666009966s 4.675579945s 4.714684484s 4.731527325s 4.809201784s 4.906292499s 4.969859996s 4.980195306s 5.008038228s 5.13475646s 5.142225462s 5.145085119s 5.181099984s 5.223954352s 5.236644415s 5.31767004s 5.327381436s 5.566998728s 5.574621689s 5.653994085s 5.694016053s 5.733910699s 5.861290274s 6.110031513s]
May  1 16:31:19.157: INFO: 50 %ile: 2.243958957s
May  1 16:31:19.157: INFO: 90 %ile: 4.809201784s
May  1 16:31:19.157: INFO: 99 %ile: 5.861290274s
May  1 16:31:19.157: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:31:19.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3707" for this suite.

• [SLOW TEST:39.764 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":222,"skipped":3688,"failed":0}
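The 50/90/99 %ile lines above are read off the sorted list of 200 latency samples. A minimal nearest-rank sketch of that computation (this approximates, but is not guaranteed to match, the e2e framework's exact indexing):

```python
def percentile(sorted_samples, pct):
    """Nearest-rank percentile over an already-sorted list of latencies.

    Assumption: this mirrors the e2e framework's calculation only
    approximately; the framework's exact rounding may differ.
    """
    if not sorted_samples:
        raise ValueError("no samples")
    # Index of the sample at or below which pct% of the samples fall.
    idx = max(0, int(len(sorted_samples) * pct / 100) - 1)
    return sorted_samples[idx]

# With 200 samples, the 50th percentile is the 100th sorted value,
# which is how a line like "50 %ile: 2.243958957s" is produced.
samples = sorted([0.087, 1.118, 2.244, 2.807, 4.809, 5.861])
print(percentile(samples, 50))
```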
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:31:19.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:31:19.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999" in namespace "downward-api-363" to be "Succeeded or Failed"
May  1 16:31:19.488: INFO: Pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999": Phase="Pending", Reason="", readiness=false. Elapsed: 28.958489ms
May  1 16:31:21.771: INFO: Pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312756573s
May  1 16:31:23.943: INFO: Pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999": Phase="Running", Reason="", readiness=true. Elapsed: 4.484428257s
May  1 16:31:26.010: INFO: Pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.551071095s
STEP: Saw pod success
May  1 16:31:26.010: INFO: Pod "downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999" satisfied condition "Succeeded or Failed"
May  1 16:31:26.014: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999 container client-container: 
STEP: delete the pod
May  1 16:31:26.163: INFO: Waiting for pod downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999 to disappear
May  1 16:31:26.195: INFO: Pod downwardapi-volume-f79457df-aab6-44bc-99fc-92f8aae54999 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:31:26.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-363" for this suite.

• [SLOW TEST:7.013 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3688,"failed":0}
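The downward API volume test above projects pod metadata into a file and verifies that an explicit `mode` on the volume item is honored. A stdlib-only stand-in for the permission-bit check the test performs (the 0400 mode and file contents are illustrative values, not taken from the log):

```python
import os
import stat
import tempfile

# Write a stand-in for the projected downward API file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("metadata.name=downwardapi-volume-example")

# Analogous to setting `mode: 0400` on a downward API volume item.
os.chmod(path, 0o400)

# The assertion the e2e test makes, in spirit: the file carries
# exactly the requested permission bits.
assert stat.S_IMODE(os.stat(path).st_mode) == 0o400

os.chmod(path, 0o600)  # restore the write bit so cleanup succeeds
os.remove(path)
```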
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:31:26.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May  1 16:31:26.316: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix468438007/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:31:26.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3571" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":224,"skipped":3789,"failed":0}
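The proxy test above starts `kubectl proxy --unix-socket=...` and then fetches `/api/` over that socket. A sketch of speaking HTTP over a Unix domain socket with only the standard library (the socket path in the usage comment is the one from the log and assumes the proxy is still running):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        # The host argument only populates the Host header here.
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Usage against the proxy socket from the log:
# conn = UnixHTTPConnection("/tmp/kubectl-proxy-unix468438007/test")
# conn.request("GET", "/api/")
# print(conn.getresponse().read())
```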
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:31:26.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:31:27.792: INFO: Creating deployment "test-recreate-deployment"
May  1 16:31:28.471: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May  1 16:31:29.584: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May  1 16:31:32.365: INFO: Waiting deployment "test-recreate-deployment" to complete
May  1 16:31:33.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947491, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:31:35.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947491, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947489, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:31:37.445: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May  1 16:31:37.514: INFO: Updating deployment test-recreate-deployment
May  1 16:31:37.514: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  1 16:31:39.436: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6727 /apis/apps/v1/namespaces/deployment-6727/deployments/test-recreate-deployment 86d560cc-f5e2-4a8e-b638-cbd2f1b1f7c6 674438 2 2020-05-01 16:31:27 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-01 16:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-01 16:31:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 
34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005696988  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-01 16:31:38 +0000 UTC,LastTransitionTime:2020-05-01 16:31:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-01 16:31:38 +0000 UTC,LastTransitionTime:2020-05-01 16:31:29 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May  1 16:31:40.029: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-6727 /apis/apps/v1/namespaces/deployment-6727/replicasets/test-recreate-deployment-d5667d9c7 c57018ef-6986-4eec-a8cc-12271c22b91d 674436 1 2020-05-01 16:31:38 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 86d560cc-f5e2-4a8e-b638-cbd2f1b1f7c6 0xc0056971e0 0xc0056971e1}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:31:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 54 100 53 54 48 99 99 45 102 53 101 50 45 52 97 56 101 45 98 54 51 56 45 99 98 100 50 102 49 98 49 102 55 99 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 
123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005697308  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 16:31:40.029: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May  1 16:31:40.029: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-6727 /apis/apps/v1/namespaces/deployment-6727/replicasets/test-recreate-deployment-74d98b5f7c 1dfb909b-6314-4f1e-93a7-fe5910e7ce7a 674423 2 2020-05-01 16:31:28 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 86d560cc-f5e2-4a8e-b638-cbd2f1b1f7c6 0xc005696fd7 0xc005696fd8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:31:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 54 100 53 54 48 99 99 45 102 53 101 50 45 52 97 56 101 45 98 54 51 56 45 99 98 100 50 102 49 98 49 102 55 99 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 
114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 
111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056970f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 16:31:40.465: INFO: Pod "test-recreate-deployment-d5667d9c7-fp7md" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-fp7md test-recreate-deployment-d5667d9c7- deployment-6727 /api/v1/namespaces/deployment-6727/pods/test-recreate-deployment-d5667d9c7-fp7md 989ba194-4ebc-46dc-a7f6-e9f973ab39ca 674441 0 2020-05-01 16:31:38 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 c57018ef-6986-4eec-a8cc-12271c22b91d 0xc0056d7cf0 0xc0056d7cf1}] []  [{kube-controller-manager Update v1 2020-05-01 16:31:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 53 55 48 49 56 101 102 45 54 57 56 54 45 52 101 101 99 45 97 56 99 99 45 49 50 50 55 49 99 50 50 98 57 49 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 
114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 16:31:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 
100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hzvxt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hzvxt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hzvxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:31:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:31:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:31:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-01 16:31:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:31:40.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6727" for this suite.

• [SLOW TEST:15.050 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":225,"skipped":3811,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:31:41.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:32:01.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6111" for this suite.

• [SLOW TEST:19.880 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":226,"skipped":3817,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:32:01.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:32:02.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3" in namespace "downward-api-5819" to be "Succeeded or Failed"
May  1 16:32:02.374: INFO: Pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 152.513049ms
May  1 16:32:04.681: INFO: Pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459723254s
May  1 16:32:07.346: INFO: Pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.12424297s
May  1 16:32:09.665: INFO: Pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.443449638s
STEP: Saw pod success
May  1 16:32:09.665: INFO: Pod "downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3" satisfied condition "Succeeded or Failed"
May  1 16:32:09.668: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3 container client-container: 
STEP: delete the pod
May  1 16:32:10.272: INFO: Waiting for pod downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3 to disappear
May  1 16:32:10.368: INFO: Pod downwardapi-volume-98f4906a-ff7d-4411-85f1-6f207e323ab3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:32:10.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5819" for this suite.

• [SLOW TEST:8.973 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:32:10.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:32:11.085: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May  1 16:32:16.184: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  1 16:32:16.184: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  1 16:32:24.790: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-7841 /apis/apps/v1/namespaces/deployment-7841/deployments/test-cleanup-deployment 3c2714a0-9959-4192-903c-01208061aeb7 675111 1 2020-05-01 16:32:16 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  [{e2e.test Update apps/v1 2020-05-01 16:32:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 
101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-01 16:32:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 
103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c9ab78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-01 16:32:16 +0000 UTC,LastTransitionTime:2020-05-01 16:32:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-b4867b47f" has successfully progressed.,LastUpdateTime:2020-05-01 16:32:21 +0000 UTC,LastTransitionTime:2020-05-01 16:32:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  1 16:32:24.793: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-7841 /apis/apps/v1/namespaces/deployment-7841/replicasets/test-cleanup-deployment-b4867b47f e0eadfcf-c499-4830-bee7-6f670f710e4c 675094 1 2020-05-01 16:32:16 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3c2714a0-9959-4192-903c-01208061aeb7 0xc00350f8d0 0xc00350f8d1}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:32:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 99 50 55 49 52 97 48 45 57 57 53 57 45 52 49 57 50 45 57 48 51 99 45 48 49 50 48 56 48 54 49 97 101 98 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 
34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 
123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00350f958  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  1 16:32:24.962: INFO: Pod "test-cleanup-deployment-b4867b47f-47h7s" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-47h7s test-cleanup-deployment-b4867b47f- deployment-7841 /api/v1/namespaces/deployment-7841/pods/test-cleanup-deployment-b4867b47f-47h7s 8cf75ded-e2e0-4e77-a99f-fb6b0b0df1ec 675092 0 2020-05-01 16:32:16 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f e0eadfcf-c499-4830-bee7-6f670f710e4c 0xc00350fee0 0xc00350fee1}] []  [{kube-controller-manager Update v1 2020-05-01 16:32:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 48 101 97 100 102 99 102 45 99 52 57 57 45 52 56 51 48 45 98 101 101 55 45 54 102 54 55 48 102 55 49 48 101 52 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 16:32:21 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 
105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 56 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vpxwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vpxwb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vpxwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:32:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:32:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:32:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.85,StartTime:2020-05-01 16:32:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 16:32:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ddf40dc0ff2f892fb02c462f2e49da1f2f0e308f92a5f13589599fcd0e5cd294,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:32:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7841" for this suite.

• [SLOW TEST:14.692 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":228,"skipped":3908,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:32:25.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May  1 16:32:25.396: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7271 /api/v1/namespaces/watch-7271/configmaps/e2e-watch-test-resource-version 9caf7e22-23cd-48de-a43e-5a59ea6935fc 675156 0 2020-05-01 16:32:25 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-01 16:32:25 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  1 16:32:25.396: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7271 /api/v1/namespaces/watch-7271/configmaps/e2e-watch-test-resource-version 9caf7e22-23cd-48de-a43e-5a59ea6935fc 675158 0 2020-05-01 16:32:25 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-01 16:32:25 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}},}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:32:25.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7271" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":229,"skipped":3910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:32:25.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-4edc784d-5b18-48e5-82aa-77b4df552a21 in namespace container-probe-4698
May  1 16:32:32.364: INFO: Started pod test-webserver-4edc784d-5b18-48e5-82aa-77b4df552a21 in namespace container-probe-4698
STEP: checking the pod's current state and verifying that restartCount is present
May  1 16:32:32.377: INFO: Initial restart count of pod test-webserver-4edc784d-5b18-48e5-82aa-77b4df552a21 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:36:33.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4698" for this suite.

• [SLOW TEST:248.411 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3943,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:36:33.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  1 16:36:38.784: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:36:39.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9191" for this suite.

• [SLOW TEST:5.861 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3957,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:36:39.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-77c201c3-195e-4844-9103-7c4d6a4f5595
STEP: Creating a pod to test consume configMaps
May  1 16:36:40.226: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f" in namespace "projected-8565" to be "Succeeded or Failed"
May  1 16:36:40.294: INFO: Pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f": Phase="Pending", Reason="", readiness=false. Elapsed: 67.660907ms
May  1 16:36:42.297: INFO: Pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071622466s
May  1 16:36:44.373: INFO: Pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f": Phase="Running", Reason="", readiness=true. Elapsed: 4.147381272s
May  1 16:36:46.449: INFO: Pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223392466s
STEP: Saw pod success
May  1 16:36:46.449: INFO: Pod "pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f" satisfied condition "Succeeded or Failed"
May  1 16:36:46.452: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f container projected-configmap-volume-test: 
STEP: delete the pod
May  1 16:36:46.738: INFO: Waiting for pod pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f to disappear
May  1 16:36:46.802: INFO: Pod pod-projected-configmaps-3a3b3853-e27f-42de-af15-2f6bf4a1484f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:36:46.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8565" for this suite.

• [SLOW TEST:7.120 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3957,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:36:46.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:36:47.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:36:49.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947808, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:36:51.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947808, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947807, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:36:55.062: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:36:55.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4545" for this suite.
STEP: Destroying namespace "webhook-4545-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.211 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":233,"skipped":3966,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:36:56.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  1 16:36:56.180: INFO: Waiting up to 5m0s for pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76" in namespace "downward-api-3424" to be "Succeeded or Failed"
May  1 16:36:56.366: INFO: Pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76": Phase="Pending", Reason="", readiness=false. Elapsed: 185.498946ms
May  1 16:36:58.550: INFO: Pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369353118s
May  1 16:37:00.554: INFO: Pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373739077s
May  1 16:37:02.844: INFO: Pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.663969934s
STEP: Saw pod success
May  1 16:37:02.844: INFO: Pod "downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76" satisfied condition "Succeeded or Failed"
May  1 16:37:03.060: INFO: Trying to get logs from node kali-worker2 pod downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76 container dapi-container: 
STEP: delete the pod
May  1 16:37:03.157: INFO: Waiting for pod downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76 to disappear
May  1 16:37:03.209: INFO: Pod downward-api-c70b1bb3-ccd9-4caf-8b43-c35aa082ff76 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:37:03.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3424" for this suite.

• [SLOW TEST:7.381 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3975,"failed":0}
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:37:03.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6391
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  1 16:37:03.972: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  1 16:37:04.419: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:37:06.424: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:37:08.557: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:37:10.491: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:12.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:14.437: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:16.434: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:18.539: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:20.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:22.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:24.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:37:26.423: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  1 16:37:26.429: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  1 16:37:32.579: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.89:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6391 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:37:32.579: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:37:32.647545       7 log.go:172] (0xc002a74d10) (0xc000c46460) Create stream
I0501 16:37:32.647582       7 log.go:172] (0xc002a74d10) (0xc000c46460) Stream added, broadcasting: 1
I0501 16:37:32.650109       7 log.go:172] (0xc002a74d10) Reply frame received for 1
I0501 16:37:32.650148       7 log.go:172] (0xc002a74d10) (0xc000c46500) Create stream
I0501 16:37:32.650155       7 log.go:172] (0xc002a74d10) (0xc000c46500) Stream added, broadcasting: 3
I0501 16:37:32.651092       7 log.go:172] (0xc002a74d10) Reply frame received for 3
I0501 16:37:32.651114       7 log.go:172] (0xc002a74d10) (0xc000c465a0) Create stream
I0501 16:37:32.651125       7 log.go:172] (0xc002a74d10) (0xc000c465a0) Stream added, broadcasting: 5
I0501 16:37:32.652061       7 log.go:172] (0xc002a74d10) Reply frame received for 5
I0501 16:37:32.756025       7 log.go:172] (0xc002a74d10) Data frame received for 5
I0501 16:37:32.756054       7 log.go:172] (0xc000c465a0) (5) Data frame handling
I0501 16:37:32.756090       7 log.go:172] (0xc002a74d10) Data frame received for 3
I0501 16:37:32.756103       7 log.go:172] (0xc000c46500) (3) Data frame handling
I0501 16:37:32.756115       7 log.go:172] (0xc000c46500) (3) Data frame sent
I0501 16:37:32.756124       7 log.go:172] (0xc002a74d10) Data frame received for 3
I0501 16:37:32.756133       7 log.go:172] (0xc000c46500) (3) Data frame handling
I0501 16:37:32.764687       7 log.go:172] (0xc002a74d10) Data frame received for 1
I0501 16:37:32.764713       7 log.go:172] (0xc000c46460) (1) Data frame handling
I0501 16:37:32.764724       7 log.go:172] (0xc000c46460) (1) Data frame sent
I0501 16:37:32.764736       7 log.go:172] (0xc002a74d10) (0xc000c46460) Stream removed, broadcasting: 1
I0501 16:37:32.764752       7 log.go:172] (0xc002a74d10) Go away received
I0501 16:37:32.764953       7 log.go:172] (0xc002a74d10) (0xc000c46460) Stream removed, broadcasting: 1
I0501 16:37:32.764988       7 log.go:172] (0xc002a74d10) (0xc000c46500) Stream removed, broadcasting: 3
I0501 16:37:32.765003       7 log.go:172] (0xc002a74d10) (0xc000c465a0) Stream removed, broadcasting: 5
May  1 16:37:32.765: INFO: Found all expected endpoints: [netserver-0]
May  1 16:37:32.767: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.56:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6391 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:37:32.768: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:37:32.794185       7 log.go:172] (0xc002b94bb0) (0xc0011f4820) Create stream
I0501 16:37:32.794228       7 log.go:172] (0xc002b94bb0) (0xc0011f4820) Stream added, broadcasting: 1
I0501 16:37:32.796962       7 log.go:172] (0xc002b94bb0) Reply frame received for 1
I0501 16:37:32.797001       7 log.go:172] (0xc002b94bb0) (0xc001168d20) Create stream
I0501 16:37:32.797017       7 log.go:172] (0xc002b94bb0) (0xc001168d20) Stream added, broadcasting: 3
I0501 16:37:32.798182       7 log.go:172] (0xc002b94bb0) Reply frame received for 3
I0501 16:37:32.798210       7 log.go:172] (0xc002b94bb0) (0xc00110e1e0) Create stream
I0501 16:37:32.798218       7 log.go:172] (0xc002b94bb0) (0xc00110e1e0) Stream added, broadcasting: 5
I0501 16:37:32.799163       7 log.go:172] (0xc002b94bb0) Reply frame received for 5
I0501 16:37:32.858703       7 log.go:172] (0xc002b94bb0) Data frame received for 3
I0501 16:37:32.858737       7 log.go:172] (0xc001168d20) (3) Data frame handling
I0501 16:37:32.858754       7 log.go:172] (0xc001168d20) (3) Data frame sent
I0501 16:37:32.858761       7 log.go:172] (0xc002b94bb0) Data frame received for 3
I0501 16:37:32.858770       7 log.go:172] (0xc001168d20) (3) Data frame handling
I0501 16:37:32.859127       7 log.go:172] (0xc002b94bb0) Data frame received for 5
I0501 16:37:32.859150       7 log.go:172] (0xc00110e1e0) (5) Data frame handling
I0501 16:37:32.860650       7 log.go:172] (0xc002b94bb0) Data frame received for 1
I0501 16:37:32.860679       7 log.go:172] (0xc0011f4820) (1) Data frame handling
I0501 16:37:32.860697       7 log.go:172] (0xc0011f4820) (1) Data frame sent
I0501 16:37:32.860713       7 log.go:172] (0xc002b94bb0) (0xc0011f4820) Stream removed, broadcasting: 1
I0501 16:37:32.860730       7 log.go:172] (0xc002b94bb0) Go away received
I0501 16:37:32.860878       7 log.go:172] (0xc002b94bb0) (0xc0011f4820) Stream removed, broadcasting: 1
I0501 16:37:32.860901       7 log.go:172] (0xc002b94bb0) (0xc001168d20) Stream removed, broadcasting: 3
I0501 16:37:32.860920       7 log.go:172] (0xc002b94bb0) (0xc00110e1e0) Stream removed, broadcasting: 5
May  1 16:37:32.860: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:37:32.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6391" for this suite.

• [SLOW TEST:29.464 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:37:32.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:37:32.979: INFO: Pod name rollover-pod: Found 0 pods out of 1
May  1 16:37:38.163: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  1 16:37:38.163: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May  1 16:37:40.341: INFO: Creating deployment "test-rollover-deployment"
May  1 16:37:40.497: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May  1 16:37:43.095: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May  1 16:37:43.102: INFO: Ensure that both replica sets have 1 created replica
May  1 16:37:43.129: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May  1 16:37:43.196: INFO: Updating deployment test-rollover-deployment
May  1 16:37:43.196: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May  1 16:37:45.294: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May  1 16:37:45.656: INFO: Make sure deployment "test-rollover-deployment" is complete
May  1 16:37:45.664: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:45.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:47.673: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:47.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947863, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:49.674: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:49.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947868, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:51.672: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:51.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947868, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:53.672: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:53.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947868, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:55.671: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:55.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947868, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:57.784: INFO: all replica sets need to contain the pod-template-hash label
May  1 16:37:57.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947868, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723947861, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:37:59.710: INFO: 
May  1 16:37:59.710: INFO: Ensure that both old replica sets have no replicas
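The poll loop above (repeating "all replica sets need to contain the pod-template-hash label" every ~2s) is the framework waiting for the deployment to report complete. As a rough sketch of the condition being polled (a simplification, not the framework's exact code — field names taken from the status dumps in this log):

```python
def deployment_complete(status, desired_replicas):
    """Sketch of a deployment-completeness check: every replica is updated,
    available, and counted, with no unavailable stragglers left over."""
    return (status["updatedReplicas"] == desired_replicas
            and status["replicas"] == desired_replicas
            and status["availableReplicas"] == desired_replicas
            and status["unavailableReplicas"] == 0)

# Mid-rollover status from the log above: 2 replicas total, only 1 updated/available.
mid = {"replicas": 2, "updatedReplicas": 1,
       "availableReplicas": 1, "unavailableReplicas": 1}
# Final status: old ReplicaSets scaled to zero, one updated replica available.
done = {"replicas": 1, "updatedReplicas": 1,
        "availableReplicas": 1, "unavailableReplicas": 0}
print(deployment_complete(mid, 1), deployment_complete(done, 1))  # False True
```

This is why the test blocks from 16:37:45 to 16:37:58: `MinReadySeconds:10` means the new pod must be ready for 10 seconds before it counts as available and the old ReplicaSet can be scaled down.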
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  1 16:37:59.718: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6540 /apis/apps/v1/namespaces/deployment-6540/deployments/test-rollover-deployment 1402472d-e3a8-4bf7-ba2e-7a25da26fddf 676745 2 2020-05-01 16:37:40 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-01 16:37:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-01 16:37:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005743538  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-01 16:37:41 +0000 UTC,LastTransitionTime:2020-05-01 16:37:41 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-01 16:37:58 +0000 UTC,LastTransitionTime:2020-05-01 16:37:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
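The strategy in the dump above (`MaxUnavailable:0, MaxSurge:1` on a 1-replica deployment) explains the `Replicas:2` seen during the rollover: the controller surges one new pod before retiring the old one. A small sketch of how those intstr values bound the pod counts — Kubernetes resolves percentage values by rounding maxSurge up and maxUnavailable down (the helper names here are illustrative, not client-go API):

```python
import math

def resolve(value, replicas, round_up):
    """Resolve a maxSurge/maxUnavailable value (int or "N%" string) against
    the replica count. Percentages: surge rounds up, unavailable rounds down."""
    if isinstance(value, str) and value.endswith("%"):
        frac = int(value[:-1]) / 100 * replicas
        return math.ceil(frac) if round_up else math.floor(frac)
    return int(value)

def rolling_update_bounds(replicas, max_surge, max_unavailable):
    surge = resolve(max_surge, replicas, round_up=True)
    unavailable = resolve(max_unavailable, replicas, round_up=False)
    return {
        "max_total_pods": replicas + surge,           # ceiling during rollout
        "min_available_pods": replicas - unavailable, # floor during rollout
    }

# The test deployment above: 1 replica, maxSurge=1, maxUnavailable=0.
print(rolling_update_bounds(1, 1, 0))        # up to 2 pods, at least 1 available
print(rolling_update_bounds(10, "25%", "25%"))
```

With `maxUnavailable=0` the old pod cannot be killed until the new one has been available for `MinReadySeconds`, which matches the observed `Replicas:2, UnavailableReplicas:1` status while the rollover was in flight.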

May  1 16:37:59.721: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-6540 /apis/apps/v1/namespaces/deployment-6540/replicasets/test-rollover-deployment-84f7f6f64b 04575a69-de44-4989-829c-ba903c15af20 676732 2 2020-05-01 16:37:43 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1402472d-e3a8-4bf7-ba2e-7a25da26fddf 0xc00590e2e7 0xc00590e2e8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:37:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 52 48 50 52 55 50 100 45 101 51 97 56 45 52 98 102 55 45 98 97 50 101 45 55 97 50 53 100 97 50 54 102 100 100 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 
34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 
105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00590e378  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  1 16:37:59.721: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May  1 16:37:59.722: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6540 /apis/apps/v1/namespaces/deployment-6540/replicasets/test-rollover-controller 92129cf8-6b4c-487f-bb30-4f5da78b6d18 676743 2 2020-05-01 16:37:32 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1402472d-e3a8-4bf7-ba2e-7a25da26fddf 0xc00590e0d7 0xc00590e0d8}] []  [{e2e.test Update apps/v1 2020-05-01 16:37:32 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-01 16:37:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 52 48 50 52 55 50 100 45 101 51 97 56 45 52 98 102 55 45 98 97 50 101 45 55 97 50 53 100 97 50 54 102 100 100 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00590e178  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  1 16:37:59.722: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-6540 /apis/apps/v1/namespaces/deployment-6540/replicasets/test-rollover-deployment-5686c4cfd5 2aa89a26-df25-4968-9c5d-a2c8183c183a 676680 2 2020-05-01 16:37:41 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1402472d-e3a8-4bf7-ba2e-7a25da26fddf 0xc00590e1e7 0xc00590e1e8}] []  [{kube-controller-manager Update apps/v1 2020-05-01 16:37:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 52 48 50 52 55 50 100 45 101 51 97 56 45 52 98 102 55 45 98 97 50 101 45 55 97 50 53 100 97 50 54 102 100 100 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 
34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00590e278  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
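The long `FieldsV1{Raw:*[123 34 ...]}` runs in the object dumps above are not corruption: Go's `%v` formatting prints the managedFields JSON payload as a slice of decimal byte values. If needed, such a run can be decoded back into readable JSON; for example, taking the first few bytes of one of the dumps above:

```python
# First bytes of a Raw:*[...] run from the dumps above, copied verbatim.
raw = "123 34 102 58 109 101 116 97 100 97 116 97 34 58"
decoded = bytes(int(b) for b in raw.split()).decode("utf-8")
print(decoded)  # {"f:metadata":
```

Decoding a full run yields the server-side-apply field ownership map (`f:spec`, `f:labels`, etc.) recorded per manager (`e2e.test`, `kube-controller-manager`).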
May  1 16:37:59.724: INFO: Pod "test-rollover-deployment-84f7f6f64b-tjfjr" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-tjfjr test-rollover-deployment-84f7f6f64b- deployment-6540 /api/v1/namespaces/deployment-6540/pods/test-rollover-deployment-84f7f6f64b-tjfjr 640c9124-9f37-4b90-a568-d94995726bcc 676702 0 2020-05-01 16:37:43 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 04575a69-de44-4989-829c-ba903c15af20 0xc00590e937 0xc00590e938}] []  [{kube-controller-manager Update v1 2020-05-01 16:37:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 52 53 55 53 97 54 57 45 100 101 52 52 45 52 57 56 57 45 56 50 57 99 45 98 97 57 48 51 99 49 53 97 102 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-01 16:37:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bdjmj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bdjmj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bdjmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:37:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:37:48 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-01 16:37:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.59,StartTime:2020-05-01 16:37:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-01 16:37:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://53ae53347500be6b2c98230fda36df255291b4ff7cd3bf84156bac5d8a63cfcc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:37:59.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6540" for this suite.

• [SLOW TEST:26.862 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":236,"skipped":4054,"failed":0}
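The rollover test above converges once the old ReplicaSet has been scaled to zero (`Replicas:*0` in the dump) while the new ReplicaSet's pod is available. A minimal sketch of that convergence condition (function name and parameters are illustrative, not from the test code):

```python
def rollover_complete(old_rs_replicas, new_rs_available, desired_replicas):
    """A rollover is done when the old ReplicaSet has been scaled down to
    zero and the new ReplicaSet has as many available pods as the
    Deployment wants. Mirrors the end state logged above: the old RS shows
    Replicas:*0 while the new RS's pod is reported available."""
    return old_rs_replicas == 0 and new_rs_available == desired_replicas
```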
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:37:59.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-82922eab-a31c-468a-b96f-0c47840f7024
STEP: Creating a pod to test consume configMaps
May  1 16:38:00.142: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458" in namespace "projected-9451" to be "Succeeded or Failed"
May  1 16:38:00.246: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Pending", Reason="", readiness=false. Elapsed: 103.9139ms
May  1 16:38:02.252: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1097579s
May  1 16:38:04.314: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172051641s
May  1 16:38:06.549: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407006411s
May  1 16:38:09.061: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918260035s
May  1 16:38:11.093: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.950264258s
STEP: Saw pod success
May  1 16:38:11.093: INFO: Pod "pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458" satisfied condition "Succeeded or Failed"
May  1 16:38:11.116: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458 container projected-configmap-volume-test: 
STEP: delete the pod
May  1 16:38:12.512: INFO: Waiting for pod pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458 to disappear
May  1 16:38:12.613: INFO: Pod pod-projected-configmaps-31ffabdb-1ca8-4913-a3df-8823d68cb458 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:38:12.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9451" for this suite.

• [SLOW TEST:12.949 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4065,"failed":0}
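The "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines above show the framework's polling pattern: check the pod phase every couple of seconds until it reaches a terminal state or the timeout expires. A self-contained sketch of that loop, assuming a `get_pod_phase` callable that stands in for the real API call:

```python
import time

def wait_for_pod_terminal(get_pod_phase, timeout_s=300, interval_s=2.0,
                          sleep=time.sleep):
    """Poll a pod's phase until it is terminal or the timeout expires.

    get_pod_phase: zero-argument callable returning "Pending", "Running",
    "Succeeded", or "Failed" (a hypothetical stand-in for an API client
    call). Returns the terminal phase; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_pod_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)  # the log above shows ~2s between checks
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
```

The `sleep` parameter is injected so the loop can be exercised without real delays; in the transcript the elapsed time climbs from a few milliseconds to roughly eleven seconds before the pod reports `Phase="Succeeded"`.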
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:38:12.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:38:14.093: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3" in namespace "security-context-test-2936" to be "Succeeded or Failed"
May  1 16:38:14.377: INFO: Pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3": Phase="Pending", Reason="", readiness=false. Elapsed: 283.581574ms
May  1 16:38:16.384: INFO: Pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.29083942s
May  1 16:38:18.469: INFO: Pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37558144s
May  1 16:38:20.472: INFO: Pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.378953885s
May  1 16:38:20.472: INFO: Pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3" satisfied condition "Succeeded or Failed"
May  1 16:38:20.478: INFO: Got logs for pod "busybox-privileged-false-b446f6fd-7ba5-4b0f-ab08-cc5801f987c3": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:38:20.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2936" for this suite.

• [SLOW TEST:7.804 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4082,"failed":0}
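The security-context test above runs a busybox pod with `privileged: false` and expects network manipulation to be denied ("ip: RTNETLINK answers: Operation not permitted"). A sketch of the kind of pod manifest involved, built as a plain dict; the image tag and command are assumptions for illustration, not taken from the log:

```python
def privileged_false_pod(name, image="docker.io/library/busybox:1.29"):
    """Build a minimal pod manifest (plain dict) that runs an `ip` command
    without privilege. The image tag and exact command are assumptions;
    the essential part is securityContext.privileged set to False, under
    which RTNETLINK operations fail with "Operation not permitted"."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": name,
                "image": image,
                # Without CAP_NET_ADMIN (implied by privileged), adding a
                # link is expected to be denied by the kernel.
                "command": ["sh", "-c", "ip link add dummy0 type dummy || true"],
                "securityContext": {"privileged": False},
            }],
        },
    }
```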
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:38:20.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:38:20.654: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May  1 16:38:20.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:20.749: INFO: Number of nodes with available pods: 0
May  1 16:38:20.749: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:21.894: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:22.025: INFO: Number of nodes with available pods: 0
May  1 16:38:22.025: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:22.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:23.314: INFO: Number of nodes with available pods: 0
May  1 16:38:23.314: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:24.049: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:24.062: INFO: Number of nodes with available pods: 0
May  1 16:38:24.062: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:24.788: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:24.818: INFO: Number of nodes with available pods: 0
May  1 16:38:24.818: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:25.754: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:25.758: INFO: Number of nodes with available pods: 1
May  1 16:38:25.758: INFO: Node kali-worker is running more than one daemon pod
May  1 16:38:27.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:27.018: INFO: Number of nodes with available pods: 2
May  1 16:38:27.018: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May  1 16:38:28.020: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:28.020: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:28.112: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:29.333: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:29.333: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:29.382: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:30.117: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:30.117: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:30.117: INFO: Pod daemon-set-bgmx9 is not available
May  1 16:38:30.122: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:31.158: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:31.158: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:31.158: INFO: Pod daemon-set-bgmx9 is not available
May  1 16:38:31.231: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:32.117: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:32.117: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:32.117: INFO: Pod daemon-set-bgmx9 is not available
May  1 16:38:32.121: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:33.116: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:33.117: INFO: Wrong image for pod: daemon-set-bgmx9. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:33.117: INFO: Pod daemon-set-bgmx9 is not available
May  1 16:38:33.120: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:34.155: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:34.155: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:34.159: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:35.118: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:35.118: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:35.122: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:36.117: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:36.117: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:36.120: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:37.118: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:37.118: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:37.123: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:38.117: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:38.117: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:38.121: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:39.177: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:39.177: INFO: Pod daemon-set-w6dxx is not available
May  1 16:38:39.325: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:40.235: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:40.469: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:41.248: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:41.432: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:42.408: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:42.412: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:43.428: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:43.428: INFO: Pod daemon-set-6t8fw is not available
May  1 16:38:43.555: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:44.235: INFO: Wrong image for pod: daemon-set-6t8fw. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  1 16:38:44.235: INFO: Pod daemon-set-6t8fw is not available
May  1 16:38:44.316: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:45.117: INFO: Pod daemon-set-bt2rw is not available
May  1 16:38:45.121: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May  1 16:38:45.125: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:45.128: INFO: Number of nodes with available pods: 1
May  1 16:38:45.128: INFO: Node kali-worker2 is running more than one daemon pod
May  1 16:38:46.259: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:46.264: INFO: Number of nodes with available pods: 1
May  1 16:38:46.264: INFO: Node kali-worker2 is running more than one daemon pod
May  1 16:38:47.133: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:47.137: INFO: Number of nodes with available pods: 1
May  1 16:38:47.137: INFO: Node kali-worker2 is running more than one daemon pod
May  1 16:38:48.146: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:48.150: INFO: Number of nodes with available pods: 1
May  1 16:38:48.150: INFO: Node kali-worker2 is running more than one daemon pod
May  1 16:38:49.152: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:38:49.156: INFO: Number of nodes with available pods: 2
May  1 16:38:49.156: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3175, will wait for the garbage collector to delete the pods
May  1 16:38:49.230: INFO: Deleting DaemonSet.extensions daemon-set took: 5.951769ms
May  1 16:38:49.530: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.239764ms
May  1 16:39:03.943: INFO: Number of nodes with available pods: 0
May  1 16:39:03.943: INFO: Number of running nodes: 0, number of available pods: 0
May  1 16:39:03.945: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3175/daemonsets","resourceVersion":"677081"},"items":null}

May  1 16:39:03.946: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3175/pods","resourceVersion":"677081"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:39:03.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3175" for this suite.

• [SLOW TEST:43.583 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":239,"skipped":4116,"failed":0}
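[Editor's note] The spec above exercises a DaemonSet whose updateStrategy is RollingUpdate. A minimal sketch of the kind of object this test drives (names and image are illustrative, not the test's actual fixture):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # mirrors the "daemon-set" name in the log; illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # pods are replaced in place when the template changes
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # placeholder image
```

The "skip checking this node" lines above occur because the pod template carries no toleration for the control-plane's `node-role.kubernetes.io/master:NoSchedule` taint, so only the two worker nodes run daemon pods.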
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:39:04.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-2ba25542-ce4c-476d-96a5-a79db87d2dad in namespace container-probe-9067
May  1 16:39:08.162: INFO: Started pod liveness-2ba25542-ce4c-476d-96a5-a79db87d2dad in namespace container-probe-9067
STEP: checking the pod's current state and verifying that restartCount is present
May  1 16:39:08.165: INFO: Initial restart count of pod liveness-2ba25542-ce4c-476d-96a5-a79db87d2dad is 0
May  1 16:39:28.210: INFO: Restart count of pod container-probe-9067/liveness-2ba25542-ce4c-476d-96a5-a79db87d2dad is now 1 (20.045162921s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:39:28.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9067" for this suite.

• [SLOW TEST:24.229 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4121,"failed":0}
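[Editor's note] The restart observed above (restartCount 0 → 1 after ~20s) is driven by an HTTP liveness probe against /healthz. A sketch of such a pod spec — the image and timings are stand-ins; any server that begins failing /healthz after a delay behaves the same way:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                 # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # stand-in: serves /healthz OK, then starts returning 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz                # kubelet GETs this endpoint on the pod IP
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1             # one failed probe triggers a container restart
```

Once the endpoint starts failing, the kubelet kills and restarts the container, which is what increments `restartCount` in the pod status.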
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:39:28.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:39:29.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6712" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":241,"skipped":4163,"failed":0}
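[Editor's note] The QOS-class test above verifies that a pod whose containers set memory and cpu limits equal to their requests is assigned the Guaranteed QoS class. A sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed          # illustrative name
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2 # placeholder image
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                   # limits identical to requests => qosClass: Guaranteed
        cpu: 100m
        memory: 100Mi
```

`kubectl get pod qos-guaranteed -o jsonpath='{.status.qosClass}'` would then report `Guaranteed`; omitting requests/limits entirely yields `BestEffort`, and anything in between yields `Burstable`.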
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:39:30.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-8968
STEP: creating replication controller nodeport-test in namespace services-8968
I0501 16:39:33.071498       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8968, replica count: 2
I0501 16:39:36.121971       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:39:39.122180       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0501 16:39:42.122444       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  1 16:39:42.122: INFO: Creating new exec pod
May  1 16:39:47.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8968 execpodnwkm6 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May  1 16:39:47.659: INFO: stderr: "I0501 16:39:47.582737    3396 log.go:172] (0xc000c246e0) (0xc000c1a500) Create stream\nI0501 16:39:47.582816    3396 log.go:172] (0xc000c246e0) (0xc000c1a500) Stream added, broadcasting: 1\nI0501 16:39:47.588933    3396 log.go:172] (0xc000c246e0) Reply frame received for 1\nI0501 16:39:47.588982    3396 log.go:172] (0xc000c246e0) (0xc000bc8320) Create stream\nI0501 16:39:47.589006    3396 log.go:172] (0xc000c246e0) (0xc000bc8320) Stream added, broadcasting: 3\nI0501 16:39:47.590262    3396 log.go:172] (0xc000c246e0) Reply frame received for 3\nI0501 16:39:47.590321    3396 log.go:172] (0xc000c246e0) (0xc000c1a5a0) Create stream\nI0501 16:39:47.590337    3396 log.go:172] (0xc000c246e0) (0xc000c1a5a0) Stream added, broadcasting: 5\nI0501 16:39:47.591198    3396 log.go:172] (0xc000c246e0) Reply frame received for 5\nI0501 16:39:47.651076    3396 log.go:172] (0xc000c246e0) Data frame received for 5\nI0501 16:39:47.651130    3396 log.go:172] (0xc000c1a5a0) (5) Data frame handling\nI0501 16:39:47.651168    3396 log.go:172] (0xc000c1a5a0) (5) Data frame sent\nI0501 16:39:47.651189    3396 log.go:172] (0xc000c246e0) Data frame received for 5\nI0501 16:39:47.651207    3396 log.go:172] (0xc000c1a5a0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0501 16:39:47.651249    3396 log.go:172] (0xc000c1a5a0) (5) Data frame sent\nI0501 16:39:47.651306    3396 log.go:172] (0xc000c246e0) Data frame received for 3\nI0501 16:39:47.651343    3396 log.go:172] (0xc000bc8320) (3) Data frame handling\nI0501 16:39:47.651385    3396 log.go:172] (0xc000c246e0) Data frame received for 5\nI0501 16:39:47.651410    3396 log.go:172] (0xc000c1a5a0) (5) Data frame handling\nI0501 16:39:47.652825    3396 log.go:172] (0xc000c246e0) Data frame received for 1\nI0501 16:39:47.652846    3396 log.go:172] (0xc000c1a500) (1) Data frame handling\nI0501 16:39:47.652874    3396 log.go:172] (0xc000c1a500) 
(1) Data frame sent\nI0501 16:39:47.652905    3396 log.go:172] (0xc000c246e0) (0xc000c1a500) Stream removed, broadcasting: 1\nI0501 16:39:47.652931    3396 log.go:172] (0xc000c246e0) Go away received\nI0501 16:39:47.653664    3396 log.go:172] (0xc000c246e0) (0xc000c1a500) Stream removed, broadcasting: 1\nI0501 16:39:47.653706    3396 log.go:172] (0xc000c246e0) (0xc000bc8320) Stream removed, broadcasting: 3\nI0501 16:39:47.653733    3396 log.go:172] (0xc000c246e0) (0xc000c1a5a0) Stream removed, broadcasting: 5\n"
May  1 16:39:47.660: INFO: stdout: ""
May  1 16:39:47.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8968 execpodnwkm6 -- /bin/sh -x -c nc -zv -t -w 2 10.111.219.38 80'
May  1 16:39:48.000: INFO: stderr: "I0501 16:39:47.921752    3415 log.go:172] (0xc000b16d10) (0xc000a8e3c0) Create stream\nI0501 16:39:47.921833    3415 log.go:172] (0xc000b16d10) (0xc000a8e3c0) Stream added, broadcasting: 1\nI0501 16:39:47.924661    3415 log.go:172] (0xc000b16d10) Reply frame received for 1\nI0501 16:39:47.924703    3415 log.go:172] (0xc000b16d10) (0xc000af41e0) Create stream\nI0501 16:39:47.924717    3415 log.go:172] (0xc000b16d10) (0xc000af41e0) Stream added, broadcasting: 3\nI0501 16:39:47.926166    3415 log.go:172] (0xc000b16d10) Reply frame received for 3\nI0501 16:39:47.926233    3415 log.go:172] (0xc000b16d10) (0xc0009c8140) Create stream\nI0501 16:39:47.926262    3415 log.go:172] (0xc000b16d10) (0xc0009c8140) Stream added, broadcasting: 5\nI0501 16:39:47.927251    3415 log.go:172] (0xc000b16d10) Reply frame received for 5\nI0501 16:39:47.992963    3415 log.go:172] (0xc000b16d10) Data frame received for 3\nI0501 16:39:47.993017    3415 log.go:172] (0xc000af41e0) (3) Data frame handling\nI0501 16:39:47.993051    3415 log.go:172] (0xc000b16d10) Data frame received for 5\nI0501 16:39:47.993067    3415 log.go:172] (0xc0009c8140) (5) Data frame handling\nI0501 16:39:47.993099    3415 log.go:172] (0xc0009c8140) (5) Data frame sent\nI0501 16:39:47.993311    3415 log.go:172] (0xc000b16d10) Data frame received for 5\nI0501 16:39:47.993336    3415 log.go:172] (0xc0009c8140) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.219.38 80\nConnection to 10.111.219.38 80 port [tcp/http] succeeded!\nI0501 16:39:47.995044    3415 log.go:172] (0xc000b16d10) Data frame received for 1\nI0501 16:39:47.995063    3415 log.go:172] (0xc000a8e3c0) (1) Data frame handling\nI0501 16:39:47.995071    3415 log.go:172] (0xc000a8e3c0) (1) Data frame sent\nI0501 16:39:47.995079    3415 log.go:172] (0xc000b16d10) (0xc000a8e3c0) Stream removed, broadcasting: 1\nI0501 16:39:47.995088    3415 log.go:172] (0xc000b16d10) Go away received\nI0501 16:39:47.995632    3415 log.go:172] 
(0xc000b16d10) (0xc000a8e3c0) Stream removed, broadcasting: 1\nI0501 16:39:47.995662    3415 log.go:172] (0xc000b16d10) (0xc000af41e0) Stream removed, broadcasting: 3\nI0501 16:39:47.995679    3415 log.go:172] (0xc000b16d10) (0xc0009c8140) Stream removed, broadcasting: 5\n"
May  1 16:39:48.000: INFO: stdout: ""
May  1 16:39:48.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8968 execpodnwkm6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 32606'
May  1 16:39:48.315: INFO: stderr: "I0501 16:39:48.229019    3436 log.go:172] (0xc00099ea50) (0xc00084c5a0) Create stream\nI0501 16:39:48.229085    3436 log.go:172] (0xc00099ea50) (0xc00084c5a0) Stream added, broadcasting: 1\nI0501 16:39:48.234093    3436 log.go:172] (0xc00099ea50) Reply frame received for 1\nI0501 16:39:48.234141    3436 log.go:172] (0xc00099ea50) (0xc00065d5e0) Create stream\nI0501 16:39:48.234154    3436 log.go:172] (0xc00099ea50) (0xc00065d5e0) Stream added, broadcasting: 3\nI0501 16:39:48.234964    3436 log.go:172] (0xc00099ea50) Reply frame received for 3\nI0501 16:39:48.234991    3436 log.go:172] (0xc00099ea50) (0xc000500a00) Create stream\nI0501 16:39:48.235000    3436 log.go:172] (0xc00099ea50) (0xc000500a00) Stream added, broadcasting: 5\nI0501 16:39:48.235832    3436 log.go:172] (0xc00099ea50) Reply frame received for 5\nI0501 16:39:48.308676    3436 log.go:172] (0xc00099ea50) Data frame received for 5\nI0501 16:39:48.308712    3436 log.go:172] (0xc000500a00) (5) Data frame handling\nI0501 16:39:48.308744    3436 log.go:172] (0xc000500a00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 32606\nConnection to 172.17.0.15 32606 port [tcp/32606] succeeded!\nI0501 16:39:48.308862    3436 log.go:172] (0xc00099ea50) Data frame received for 3\nI0501 16:39:48.308896    3436 log.go:172] (0xc00065d5e0) (3) Data frame handling\nI0501 16:39:48.309447    3436 log.go:172] (0xc00099ea50) Data frame received for 5\nI0501 16:39:48.309480    3436 log.go:172] (0xc000500a00) (5) Data frame handling\nI0501 16:39:48.310787    3436 log.go:172] (0xc00099ea50) Data frame received for 1\nI0501 16:39:48.310801    3436 log.go:172] (0xc00084c5a0) (1) Data frame handling\nI0501 16:39:48.310809    3436 log.go:172] (0xc00084c5a0) (1) Data frame sent\nI0501 16:39:48.310818    3436 log.go:172] (0xc00099ea50) (0xc00084c5a0) Stream removed, broadcasting: 1\nI0501 16:39:48.310842    3436 log.go:172] (0xc00099ea50) Go away received\nI0501 16:39:48.311136    3436 log.go:172] 
(0xc00099ea50) (0xc00084c5a0) Stream removed, broadcasting: 1\nI0501 16:39:48.311163    3436 log.go:172] (0xc00099ea50) (0xc00065d5e0) Stream removed, broadcasting: 3\nI0501 16:39:48.311176    3436 log.go:172] (0xc00099ea50) (0xc000500a00) Stream removed, broadcasting: 5\n"
May  1 16:39:48.315: INFO: stdout: ""
May  1 16:39:48.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8968 execpodnwkm6 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 32606'
May  1 16:39:48.520: INFO: stderr: "I0501 16:39:48.446266    3458 log.go:172] (0xc000988000) (0xc000567720) Create stream\nI0501 16:39:48.446315    3458 log.go:172] (0xc000988000) (0xc000567720) Stream added, broadcasting: 1\nI0501 16:39:48.448430    3458 log.go:172] (0xc000988000) Reply frame received for 1\nI0501 16:39:48.448482    3458 log.go:172] (0xc000988000) (0xc0005677c0) Create stream\nI0501 16:39:48.448497    3458 log.go:172] (0xc000988000) (0xc0005677c0) Stream added, broadcasting: 3\nI0501 16:39:48.449257    3458 log.go:172] (0xc000988000) Reply frame received for 3\nI0501 16:39:48.449281    3458 log.go:172] (0xc000988000) (0xc000840000) Create stream\nI0501 16:39:48.449289    3458 log.go:172] (0xc000988000) (0xc000840000) Stream added, broadcasting: 5\nI0501 16:39:48.449911    3458 log.go:172] (0xc000988000) Reply frame received for 5\nI0501 16:39:48.512770    3458 log.go:172] (0xc000988000) Data frame received for 5\nI0501 16:39:48.512807    3458 log.go:172] (0xc000840000) (5) Data frame handling\nI0501 16:39:48.512830    3458 log.go:172] (0xc000840000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 32606\nConnection to 172.17.0.18 32606 port [tcp/32606] succeeded!\nI0501 16:39:48.512901    3458 log.go:172] (0xc000988000) Data frame received for 3\nI0501 16:39:48.512932    3458 log.go:172] (0xc0005677c0) (3) Data frame handling\nI0501 16:39:48.512961    3458 log.go:172] (0xc000988000) Data frame received for 5\nI0501 16:39:48.512976    3458 log.go:172] (0xc000840000) (5) Data frame handling\nI0501 16:39:48.514400    3458 log.go:172] (0xc000988000) Data frame received for 1\nI0501 16:39:48.514417    3458 log.go:172] (0xc000567720) (1) Data frame handling\nI0501 16:39:48.514432    3458 log.go:172] (0xc000567720) (1) Data frame sent\nI0501 16:39:48.514442    3458 log.go:172] (0xc000988000) (0xc000567720) Stream removed, broadcasting: 1\nI0501 16:39:48.514457    3458 log.go:172] (0xc000988000) Go away received\nI0501 16:39:48.514969    3458 log.go:172] 
(0xc000988000) (0xc000567720) Stream removed, broadcasting: 1\nI0501 16:39:48.514995    3458 log.go:172] (0xc000988000) (0xc0005677c0) Stream removed, broadcasting: 3\nI0501 16:39:48.515009    3458 log.go:172] (0xc000988000) (0xc000840000) Stream removed, broadcasting: 5\n"
May  1 16:39:48.520: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:39:48.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8968" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:18.455 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":242,"skipped":4289,"failed":0}
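[Editor's note] The three `nc -zv` checks above hit the service by name (nodeport-test:80), by cluster IP (10.111.219.38:80), and by each node IP on the allocated node port (172.17.0.15:32606 and 172.17.0.18:32606). A sketch of the Service under test (the selector label is assumed, not taken from the fixture):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test   # assumed label on the replication controller's pods
  ports:
  - port: 80              # ClusterIP port (the 10.111.219.38:80 check)
    targetPort: 80
    # nodePort is auto-allocated from the 30000-32767 range unless pinned;
    # in this run it came out as 32606
```

With type NodePort, kube-proxy opens the allocated port on every node, so the same backend pods are reachable on any node IP regardless of where they are scheduled.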
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:39:48.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May  1 16:39:48.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May  1 16:39:59.997: INFO: >>> kubeConfig: /root/.kube/config
May  1 16:40:02.947: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:40:13.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9708" for this suite.

• [SLOW TEST:25.212 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":243,"skipped":4318,"failed":0}
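[Editor's note] The "one multiversion CRD" case above registers a single CRD serving two versions, both of which must then appear in the apiserver's published OpenAPI document. A sketch of a multiversion CRD — group, names, and schemas are hypothetical:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true                   # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

The "two CRDs" case is the same idea with the versions split across two CRDs in the same group, which is why the log shows two extra kubeConfig loads for the second CRD pair.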
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:40:13.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:40:14.245: INFO: Creating ReplicaSet my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e
May  1 16:40:14.891: INFO: Pod name my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e: Found 0 pods out of 1
May  1 16:40:19.895: INFO: Pod name my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e: Found 1 pods out of 1
May  1 16:40:19.895: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e" is running
May  1 16:40:21.905: INFO: Pod "my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e-qklx9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:40:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:40:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:40:15 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-01 16:40:14 +0000 UTC Reason: Message:}])
May  1 16:40:21.905: INFO: Trying to dial the pod
May  1 16:40:26.935: INFO: Controller my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e: Got expected result from replica 1 [my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e-qklx9]: "my-hostname-basic-d1a7c31a-ff7a-49da-a2c6-bbe3e000df9e-qklx9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:40:26.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8322" for this suite.

• [SLOW TEST:13.199 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":244,"skipped":4320,"failed":0}
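[Editor's note] The ReplicaSet test above dials each replica and expects it to answer with its own pod name, which is the behavior of a serve-hostname container. A sketch with a shortened name (image tag and port are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic           # the real fixture uses a generated UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.12  # assumed image; serves its hostname over HTTP
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```

Dialing the pod and comparing the response to the pod name (as in the "Got expected result from replica 1" line) confirms each replica is a distinct, reachable pod.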
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:40:26.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:40:44.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1318" for this suite.

• [SLOW TEST:17.816 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":245,"skipped":4320,"failed":0}
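[Editor's note] The two quotas above are scoped so that one counts only BestEffort pods (no requests or limits) and the other only NotBestEffort pods, which is why each pod's usage is captured by exactly one quota. A sketch (names and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-besteffort      # illustrative name
spec:
  hard:
    pods: "5"                 # BestEffort-scoped quotas can only constrain pod count
  scopes:
  - BestEffort                # matches pods with no resource requests or limits
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-besteffort
spec:
  hard:
    pods: "5"
  scopes:
  - NotBestEffort             # matches pods that set requests or limits
```

Creating a pod with no resources increments only the BestEffort quota's `used.pods`; deleting it releases the charge, matching the "captures / ignored / released" steps in the log.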
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:40:44.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-96a21822-6a66-476c-b3e4-de67b9039760
STEP: Creating a pod to test consume configMaps
May  1 16:40:45.438: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f" in namespace "configmap-8431" to be "Succeeded or Failed"
May  1 16:40:45.468: INFO: Pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.77092ms
May  1 16:40:47.470: INFO: Pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032432793s
May  1 16:40:49.578: INFO: Pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140277135s
May  1 16:40:51.582: INFO: Pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144235089s
STEP: Saw pod success
May  1 16:40:51.582: INFO: Pod "pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f" satisfied condition "Succeeded or Failed"
May  1 16:40:51.585: INFO: Trying to get logs from node kali-worker pod pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f container configmap-volume-test: 
STEP: delete the pod
May  1 16:40:52.076: INFO: Waiting for pod pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f to disappear
May  1 16:40:52.084: INFO: Pod pod-configmaps-3b4610b8-941f-4a99-a603-7c53b72ede8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:40:52.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8431" for this suite.

• [SLOW TEST:7.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4336,"failed":0}
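[Editor's note] The "as non-root" variant above mounts a ConfigMap volume into a pod that runs under a non-root UID and checks the projected file is still readable. A sketch (UID, paths, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps            # illustrative name
spec:
  securityContext:
    runAsUser: 1000               # non-root UID; the test asserts the file is readable anyway
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume # must already exist in the namespace
  containers:
  - name: configmap-volume-test
    image: busybox                # placeholder image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
```

Because the container exits after printing the file, the pod reaches phase Succeeded, which is the "Succeeded or Failed" condition the log polls for.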
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:40:52.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-595
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-595 to expose endpoints map[]
May  1 16:40:52.707: INFO: Get endpoints failed (192.983325ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May  1 16:40:53.711: INFO: successfully validated that service endpoint-test2 in namespace services-595 exposes endpoints map[] (1.196207578s elapsed)
STEP: Creating pod pod1 in namespace services-595
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-595 to expose endpoints map[pod1:[80]]
May  1 16:40:58.597: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.881005722s elapsed, will retry)
May  1 16:40:59.607: INFO: successfully validated that service endpoint-test2 in namespace services-595 exposes endpoints map[pod1:[80]] (5.890244525s elapsed)
STEP: Creating pod pod2 in namespace services-595
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-595 to expose endpoints map[pod1:[80] pod2:[80]]
May  1 16:41:03.783: INFO: successfully validated that service endpoint-test2 in namespace services-595 exposes endpoints map[pod1:[80] pod2:[80]] (4.172055435s elapsed)
STEP: Deleting pod pod1 in namespace services-595
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-595 to expose endpoints map[pod2:[80]]
May  1 16:41:04.861: INFO: successfully validated that service endpoint-test2 in namespace services-595 exposes endpoints map[pod2:[80]] (1.073620325s elapsed)
STEP: Deleting pod pod2 in namespace services-595
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-595 to expose endpoints map[]
May  1 16:41:06.271: INFO: successfully validated that service endpoint-test2 in namespace services-595 exposes endpoints map[] (1.406215041s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:41:07.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-595" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.177 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":247,"skipped":4348,"failed":0}
SSSSSSSS
------------------------------
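The endpoint test above polls the service's Endpoints object until it matches the expected pod→ports map, logging "Unexpected endpoints … will retry" on each mismatch and "successfully validated" once it converges. A minimal sketch of that poll-until-match pattern (the `get_endpoints` getter and the injected `clock`/`sleep` are hypothetical stand-ins, not the e2e framework's API):

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_endpoints() until it equals `expected` (a pod-name -> port-list
    map) or `timeout` seconds elapse. Returns the elapsed time on success."""
    start = clock()
    while True:
        elapsed = clock() - start
        current = get_endpoints()
        if current == expected:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(
                f"unexpected endpoints: found {current}, expected {expected}")
        sleep(interval)

# Simulated rollout: the endpoint appears on the third poll, as in the
# "(4.881005722s elapsed, will retry)" / "successfully validated" pair above.
polls = iter([{}, {}, {"pod1": [80]}])
elapsed = wait_for_endpoints(lambda: next(polls), {"pod1": [80]},
                             sleep=lambda s: None)
```

The 3m0s/180s default mirrors the "waiting up to 3m0s" step; the real framework also re-reads the Endpoints resource via the API server rather than a plain callable.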
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:41:07.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8587.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8587.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

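The `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8587.pod.cluster.local"}'` fragment in both probe command lines above derives the pod's DNS A-record name by replacing the dots of its IPv4 address with dashes. The same transform in Python (the function name and the sample IP are illustrative, not taken from the log):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the cluster DNS A record for a pod: dots in the IPv4 address
    become dashes, e.g. 10.244.1.7 -> 10-244-1-7.<ns>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"

name = pod_a_record("10.244.1.7", "dns-8587")
```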
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  1 16:41:20.251: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.255: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.258: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.270: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.273: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.276: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.279: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:20.336: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:25.404: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.408: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.477: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.480: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.491: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.493: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.495: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.498: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:25.502: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:30.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.344: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.348: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.351: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.393: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.396: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.399: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.402: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:30.407: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:35.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.344: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.348: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.351: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.372: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.375: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.376: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.379: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:35.383: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:40.560: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:40.769: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:40.772: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:40.853: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:40.862: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:41.009: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:41.014: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:41.333: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:41.342: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local jessie_udp@dns-test-service-2.dns-8587.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:45.344: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local from pod dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8: the server could not find the requested resource (get pods dns-test-5689273a-4573-4438-8152-2aa0200eb6b8)
May  1 16:41:45.370: INFO: Lookups using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8587.svc.cluster.local]

May  1 16:41:50.373: INFO: DNS probes using dns-8587/dns-test-5689273a-4573-4438-8152-2aa0200eb6b8 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:41:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8587" for this suite.

• [SLOW TEST:43.966 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":248,"skipped":4356,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
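In the DNS test above, each "Lookups using … failed for: […]" line aggregates the probe names whose result files the prober has not yet written with "OK"; the test passes once that list is empty ("DNS probes … succeeded"). A sketch of just that aggregation step, assuming `results` maps probe name to the file contents written under `/results` (not the framework's real checker):

```python
def failed_lookups(expected_names, results):
    """Return the expected probe names that have no 'OK' result yet,
    preserving order, like the bracketed failure lists in the log."""
    return [n for n in expected_names if results.get(n, "").strip() != "OK"]

expected = [
    "wheezy_udp@dns-test-service-2.dns-8587.svc.cluster.local",
    "jessie_udp@PodARecord",
]
# First pass: nothing has resolved yet, so every lookup is reported as failed.
first_pass = failed_lookups(expected, {})
```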
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:41:51.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:41:51.404: INFO: Create a RollingUpdate DaemonSet
May  1 16:41:51.408: INFO: Check that daemon pods launch on every node of the cluster
May  1 16:41:51.451: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:51.463: INFO: Number of nodes with available pods: 0
May  1 16:41:51.463: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:52.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:52.473: INFO: Number of nodes with available pods: 0
May  1 16:41:52.473: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:53.507: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:53.757: INFO: Number of nodes with available pods: 0
May  1 16:41:53.757: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:54.559: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:54.563: INFO: Number of nodes with available pods: 0
May  1 16:41:54.563: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:55.468: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:55.473: INFO: Number of nodes with available pods: 0
May  1 16:41:55.473: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:56.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:56.618: INFO: Number of nodes with available pods: 0
May  1 16:41:56.618: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:57.513: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:57.573: INFO: Number of nodes with available pods: 0
May  1 16:41:57.573: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:58.468: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:58.471: INFO: Number of nodes with available pods: 1
May  1 16:41:58.471: INFO: Node kali-worker is running more than one daemon pod
May  1 16:41:59.468: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:41:59.471: INFO: Number of nodes with available pods: 2
May  1 16:41:59.471: INFO: Number of running nodes: 2, number of available pods: 2
May  1 16:41:59.471: INFO: Update the DaemonSet to trigger a rollout
May  1 16:41:59.479: INFO: Updating DaemonSet daemon-set
May  1 16:42:04.500: INFO: Roll back the DaemonSet before rollout is complete
May  1 16:42:04.566: INFO: Updating DaemonSet daemon-set
May  1 16:42:04.566: INFO: Make sure DaemonSet rollback is complete
May  1 16:42:04.605: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:04.605: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:04.633: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:05.639: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:05.639: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:05.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:06.854: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:06.854: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:06.858: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:07.651: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:07.651: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:07.655: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:08.638: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:08.638: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:08.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:09.639: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:09.639: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:09.644: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:12.559: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:12.559: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:12.867: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:13.704: INFO: Wrong image for pod: daemon-set-mvp5r. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  1 16:42:13.704: INFO: Pod daemon-set-mvp5r is not available
May  1 16:42:13.742: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:42:14.734: INFO: Pod daemon-set-px64k is not available
May  1 16:42:14.738: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1274, will wait for the garbage collector to delete the pods
May  1 16:42:14.806: INFO: Deleting DaemonSet.extensions daemon-set took: 8.472632ms
May  1 16:42:15.506: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.194015ms
May  1 16:42:24.009: INFO: Number of nodes with available pods: 0
May  1 16:42:24.009: INFO: Number of running nodes: 0, number of available pods: 0
May  1 16:42:24.012: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1274/daemonsets","resourceVersion":"678107"},"items":null}

May  1 16:42:24.014: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1274/pods","resourceVersion":"678107"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:42:24.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1274" for this suite.

• [SLOW TEST:32.864 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":249,"skipped":4371,"failed":0}
SSS
------------------------------
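The rollback check in the DaemonSet test above repeatedly compares each daemon pod's image against the rolled-back template and logs "Wrong image for pod" until only template-image pods remain. A minimal sketch of that comparison (taking `pods` as a name→image map; the real framework reads pod specs from the API server):

```python
def rollback_incomplete_pods(pods, expected_image):
    """Return one message per daemon pod still running a non-template image,
    mirroring the 'Wrong image for pod' log lines above."""
    msgs = []
    for name, image in pods.items():
        if image != expected_image:
            msgs.append(f"Wrong image for pod: {name}. "
                        f"Expected: {expected_image}, got: {image}.")
    return msgs

# Values taken from the log: mvp5r still has the bad rollout image,
# px64k already runs the rolled-back template image.
pods = {
    "daemon-set-mvp5r": "foo:non-existent",
    "daemon-set-px64k": "docker.io/library/httpd:2.4.38-alpine",
}
msgs = rollback_incomplete_pods(pods, "docker.io/library/httpd:2.4.38-alpine")
```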
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:42:24.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  1 16:42:29.562: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:42:29.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4028" for this suite.

• [SLOW TEST:5.492 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4374,"failed":0}
SSSSSS
------------------------------
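The termination-message test above asserts an empty message (`Expected: &{} to match …`) because `FallbackToLogsOnError` only substitutes container logs when the container *fails*; a succeeded container with no written message reports an empty string. A sketch of that policy's semantics (not the kubelet's code; the truncation size is illustrative):

```python
def termination_message(written_message, logs, exit_code, policy):
    """Resolve a container's termination message. With policy
    'FallbackToLogsOnError', logs are used only when no message was written
    AND the container exited non-zero; a succeeded container with no message
    therefore yields an empty string, as the test above asserts."""
    if written_message:
        return written_message
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-4096:]  # tail of the logs; exact limit is illustrative
    return ""
```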
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:42:29.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  1 16:42:30.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  1 16:42:32.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  1 16:42:34.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723948150, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  1 16:42:37.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:42:47.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3980" for this suite.
STEP: Destroying namespace "webhook-3980-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":251,"skipped":4380,"failed":0}
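The deny behavior exercised above is driven by a ValidatingWebhookConfiguration pointing at the deployed webhook service. A minimal sketch follows; the service name `e2e-test-webhook` and namespace appear in the log, but the configuration name, path, and rule details here are assumptions for illustration:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-pods-and-configmaps      # placeholder name
webhooks:
- name: deny.example.com              # placeholder webhook name
  clientConfig:
    service:
      namespace: webhook-3980         # namespace used in this run
      name: e2e-test-webhook          # service name from this run
      path: /validate                 # assumed path
    caBundle: "<base64-encoded CA certificate>"   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With `failurePolicy: Fail`, CREATE and UPDATE requests for pods and configmaps are rejected whenever the webhook denies (or, as the "hang" step shows, fails to answer) the AdmissionReview; the whitelisted-namespace step works because the e2e framework labels its marker namespace so a namespaceSelector can exempt it.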
SSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:42:48.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:42:48.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1802" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":252,"skipped":4388,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:42:48.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  1 16:43:05.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:05.338: INFO: Pod pod-with-poststart-exec-hook still exists
May  1 16:43:07.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:07.391: INFO: Pod pod-with-poststart-exec-hook still exists
May  1 16:43:09.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:09.343: INFO: Pod pod-with-poststart-exec-hook still exists
May  1 16:43:11.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:11.368: INFO: Pod pod-with-poststart-exec-hook still exists
May  1 16:43:13.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:13.416: INFO: Pod pod-with-poststart-exec-hook still exists
May  1 16:43:15.339: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  1 16:43:15.343: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:15.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-689" for this suite.

• [SLOW TEST:27.194 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4388,"failed":0}
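A pod with a postStart exec hook, as created above, looks roughly like the following. The pod name matches this run; the image and hook command are placeholders (the actual e2e test's hook contacts a separate HTTP handler pod rather than writing a file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # pod name from this run
spec:
  containers:
  - name: main
    image: busybox                     # placeholder image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # placeholder hook command
```

The kubelet runs the postStart handler immediately after the container is created; the container is not marked Running until the hook completes, and a hook failure kills the container per its restart policy.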
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:15.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:22.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6245" for this suite.

• [SLOW TEST:7.631 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":254,"skipped":4388,"failed":0}
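The quota created above is of this general shape (a sketch; the quota name and the specific hard limits are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # placeholder name
  namespace: resourcequota-6245    # namespace from this run
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 500Mi
    secrets: "10"
```

After creation, the quota controller populates `status.hard` and `status.used`; "status is promptly calculated" means the test polls until `status.used` reflects the namespace's existing objects.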
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:22.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  1 16:43:27.905: INFO: Successfully updated pod "labelsupdatea6752694-bc17-48c4-9099-00e7aedf3766"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2379" for this suite.

• [SLOW TEST:7.058 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4396,"failed":0}
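The labels-update check above relies on a downward API volume that projects pod labels into a file. A minimal sketch, with placeholder names, labels, and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo            # placeholder name
  labels:
    key1: value1               # placeholder label
spec:
  containers:
  - name: client
    image: busybox             # placeholder image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

When the pod's labels are modified (as the "Successfully updated pod" line records), the kubelet rewrites the projected `labels` file, and the test reads the container's output to confirm the new values appear.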
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:30.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May  1 16:43:30.398: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:43.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7946" for this suite.

• [SLOW TEST:13.801 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4436,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:43.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May  1 16:43:44.180: INFO: Waiting up to 5m0s for pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1" in namespace "containers-7018" to be "Succeeded or Failed"
May  1 16:43:44.345: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 165.034518ms
May  1 16:43:46.598: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418303582s
May  1 16:43:49.075: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.894992274s
May  1 16:43:51.092: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912217499s
May  1 16:43:53.096: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.915857235s
STEP: Saw pod success
May  1 16:43:53.096: INFO: Pod "client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1" satisfied condition "Succeeded or Failed"
May  1 16:43:53.098: INFO: Trying to get logs from node kali-worker2 pod client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1 container test-container: 
STEP: delete the pod
May  1 16:43:53.275: INFO: Waiting for pod client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1 to disappear
May  1 16:43:53.524: INFO: Pod client-containers-77cd3f45-6765-463f-816a-f931e68b5bc1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:53.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7018" for this suite.

• [SLOW TEST:9.724 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4438,"failed":0}
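Overriding the image's default command, as tested above, is done with the container `command` field, which replaces the image's ENTRYPOINT (while `args` would replace its CMD). A sketch with placeholder name, image, and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo       # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # placeholder image
    command: ["echo", "overridden"] # replaces the image ENTRYPOINT
```

The test then fetches the container's logs (as the "Trying to get logs" line shows) and verifies the output came from the overridden command rather than the image default.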
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:53.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:43:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3168" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":258,"skipped":4451,"failed":0}
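The secret created and patched above is of this general shape (names, label, and data are illustrative placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret          # placeholder name
  labels:
    testsecret: "true"       # placeholder label used for the selector steps
type: Opaque
data:
  key: dmFsdWU=              # base64 of "value"
```

A strategic-merge patch such as `kubectl patch secret test-secret -p '{"data":{"key":"bmV3"}}'` (an illustrative command, not from this run) then updates the data in place; the test subsequently lists secrets across namespaces filtered by the patched label and deletes them with the same label selector.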
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:43:53.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3960
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3960
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3960
May  1 16:43:55.177: INFO: Found 0 stateful pods, waiting for 1
May  1 16:44:05.182: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May  1 16:44:05.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 16:44:14.734: INFO: stderr: "I0501 16:44:14.579539    3476 log.go:172] (0xc0007fa790) (0xc000687900) Create stream\nI0501 16:44:14.579578    3476 log.go:172] (0xc0007fa790) (0xc000687900) Stream added, broadcasting: 1\nI0501 16:44:14.583391    3476 log.go:172] (0xc0007fa790) Reply frame received for 1\nI0501 16:44:14.583439    3476 log.go:172] (0xc0007fa790) (0xc0005bf720) Create stream\nI0501 16:44:14.583455    3476 log.go:172] (0xc0007fa790) (0xc0005bf720) Stream added, broadcasting: 3\nI0501 16:44:14.586734    3476 log.go:172] (0xc0007fa790) Reply frame received for 3\nI0501 16:44:14.586785    3476 log.go:172] (0xc0007fa790) (0xc000522b40) Create stream\nI0501 16:44:14.586821    3476 log.go:172] (0xc0007fa790) (0xc000522b40) Stream added, broadcasting: 5\nI0501 16:44:14.587609    3476 log.go:172] (0xc0007fa790) Reply frame received for 5\nI0501 16:44:14.652538    3476 log.go:172] (0xc0007fa790) Data frame received for 5\nI0501 16:44:14.652570    3476 log.go:172] (0xc000522b40) (5) Data frame handling\nI0501 16:44:14.652601    3476 log.go:172] (0xc000522b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 16:44:14.726125    3476 log.go:172] (0xc0007fa790) Data frame received for 3\nI0501 16:44:14.726167    3476 log.go:172] (0xc0005bf720) (3) Data frame handling\nI0501 16:44:14.726189    3476 log.go:172] (0xc0005bf720) (3) Data frame sent\nI0501 16:44:14.726497    3476 log.go:172] (0xc0007fa790) Data frame received for 3\nI0501 16:44:14.726567    3476 log.go:172] (0xc0005bf720) (3) Data frame handling\nI0501 16:44:14.726595    3476 log.go:172] (0xc0007fa790) Data frame received for 5\nI0501 16:44:14.726611    3476 log.go:172] (0xc000522b40) (5) Data frame handling\nI0501 16:44:14.728034    3476 log.go:172] (0xc0007fa790) Data frame received for 1\nI0501 16:44:14.728068    3476 log.go:172] (0xc000687900) (1) Data frame handling\nI0501 16:44:14.728097    3476 log.go:172] (0xc000687900) (1) Data frame sent\nI0501 16:44:14.728119    3476 log.go:172] (0xc0007fa790) (0xc000687900) Stream removed, broadcasting: 1\nI0501 16:44:14.728144    3476 log.go:172] (0xc0007fa790) Go away received\nI0501 16:44:14.728739    3476 log.go:172] (0xc0007fa790) (0xc000687900) Stream removed, broadcasting: 1\nI0501 16:44:14.728763    3476 log.go:172] (0xc0007fa790) (0xc0005bf720) Stream removed, broadcasting: 3\nI0501 16:44:14.728787    3476 log.go:172] (0xc0007fa790) (0xc000522b40) Stream removed, broadcasting: 5\n"
May  1 16:44:14.734: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 16:44:14.734: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 16:44:14.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May  1 16:44:24.742: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  1 16:44:24.742: INFO: Waiting for statefulset status.replicas updated to 0
May  1 16:44:24.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999544s
May  1 16:44:25.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.918610545s
May  1 16:44:26.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.913956022s
May  1 16:44:27.878: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.90981337s
May  1 16:44:28.882: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.874801351s
May  1 16:44:30.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.870161509s
May  1 16:44:31.260: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.497463929s
May  1 16:44:32.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.492809917s
May  1 16:44:33.270: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.488286844s
May  1 16:44:34.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 482.828584ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3960
May  1 16:44:35.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 16:44:35.490: INFO: stderr: "I0501 16:44:35.422978    3508 log.go:172] (0xc000a12000) (0xc0005e5720) Create stream\nI0501 16:44:35.423043    3508 log.go:172] (0xc000a12000) (0xc0005e5720) Stream added, broadcasting: 1\nI0501 16:44:35.426454    3508 log.go:172] (0xc000a12000) Reply frame received for 1\nI0501 16:44:35.426497    3508 log.go:172] (0xc000a12000) (0xc0004eeb40) Create stream\nI0501 16:44:35.426511    3508 log.go:172] (0xc000a12000) (0xc0004eeb40) Stream added, broadcasting: 3\nI0501 16:44:35.427512    3508 log.go:172] (0xc000a12000) Reply frame received for 3\nI0501 16:44:35.427546    3508 log.go:172] (0xc000a12000) (0xc000982000) Create stream\nI0501 16:44:35.427557    3508 log.go:172] (0xc000a12000) (0xc000982000) Stream added, broadcasting: 5\nI0501 16:44:35.428401    3508 log.go:172] (0xc000a12000) Reply frame received for 5\nI0501 16:44:35.483530    3508 log.go:172] (0xc000a12000) Data frame received for 5\nI0501 16:44:35.483563    3508 log.go:172] (0xc000982000) (5) Data frame handling\nI0501 16:44:35.483579    3508 log.go:172] (0xc000982000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 16:44:35.483824    3508 log.go:172] (0xc000a12000) Data frame received for 3\nI0501 16:44:35.483851    3508 log.go:172] (0xc0004eeb40) (3) Data frame handling\nI0501 16:44:35.483876    3508 log.go:172] (0xc0004eeb40) (3) Data frame sent\nI0501 16:44:35.483910    3508 log.go:172] (0xc000a12000) Data frame received for 3\nI0501 16:44:35.483922    3508 log.go:172] (0xc0004eeb40) (3) Data frame handling\nI0501 16:44:35.483989    3508 log.go:172] (0xc000a12000) Data frame received for 5\nI0501 16:44:35.484013    3508 log.go:172] (0xc000982000) (5) Data frame handling\nI0501 16:44:35.485237    3508 log.go:172] (0xc000a12000) Data frame received for 1\nI0501 16:44:35.485252    3508 log.go:172] (0xc0005e5720) (1) Data frame handling\nI0501 16:44:35.485259    3508 log.go:172] (0xc0005e5720) (1) Data frame sent\nI0501 16:44:35.485267    3508 log.go:172] (0xc000a12000) (0xc0005e5720) Stream removed, broadcasting: 1\nI0501 16:44:35.485274    3508 log.go:172] (0xc000a12000) Go away received\nI0501 16:44:35.485677    3508 log.go:172] (0xc000a12000) (0xc0005e5720) Stream removed, broadcasting: 1\nI0501 16:44:35.485705    3508 log.go:172] (0xc000a12000) (0xc0004eeb40) Stream removed, broadcasting: 3\nI0501 16:44:35.485719    3508 log.go:172] (0xc000a12000) (0xc000982000) Stream removed, broadcasting: 5\n"
May  1 16:44:35.490: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 16:44:35.490: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 16:44:35.493: INFO: Found 1 stateful pods, waiting for 3
May  1 16:44:45.500: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  1 16:44:45.500: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  1 16:44:45.500: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May  1 16:44:45.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 16:44:45.709: INFO: stderr: "I0501 16:44:45.634818    3529 log.go:172] (0xc0000ea370) (0xc000a52000) Create stream\nI0501 16:44:45.634893    3529 log.go:172] (0xc0000ea370) (0xc000a52000) Stream added, broadcasting: 1\nI0501 16:44:45.636654    3529 log.go:172] (0xc0000ea370) Reply frame received for 1\nI0501 16:44:45.636701    3529 log.go:172] (0xc0000ea370) (0xc00056cb40) Create stream\nI0501 16:44:45.636715    3529 log.go:172] (0xc0000ea370) (0xc00056cb40) Stream added, broadcasting: 3\nI0501 16:44:45.637939    3529 log.go:172] (0xc0000ea370) Reply frame received for 3\nI0501 16:44:45.637970    3529 log.go:172] (0xc0000ea370) (0xc00081b2c0) Create stream\nI0501 16:44:45.637982    3529 log.go:172] (0xc0000ea370) (0xc00081b2c0) Stream added, broadcasting: 5\nI0501 16:44:45.638842    3529 log.go:172] (0xc0000ea370) Reply frame received for 5\nI0501 16:44:45.702121    3529 log.go:172] (0xc0000ea370) Data frame received for 3\nI0501 16:44:45.702174    3529 log.go:172] (0xc00056cb40) (3) Data frame handling\nI0501 16:44:45.702190    3529 log.go:172] (0xc00056cb40) (3) Data frame sent\nI0501 16:44:45.702208    3529 log.go:172] (0xc0000ea370) Data frame received for 3\nI0501 16:44:45.702225    3529 log.go:172] (0xc00056cb40) (3) Data frame handling\nI0501 16:44:45.702260    3529 log.go:172] (0xc0000ea370) Data frame received for 5\nI0501 16:44:45.702272    3529 log.go:172] (0xc00081b2c0) (5) Data frame handling\nI0501 16:44:45.702286    3529 log.go:172] (0xc00081b2c0) (5) Data frame sent\nI0501 16:44:45.702311    3529 log.go:172] (0xc0000ea370) Data frame received for 5\nI0501 16:44:45.702343    3529 log.go:172] (0xc00081b2c0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 16:44:45.704430    3529 log.go:172] (0xc0000ea370) Data frame received for 1\nI0501 16:44:45.704481    3529 log.go:172] (0xc000a52000) (1) Data frame handling\nI0501 16:44:45.704510    3529 log.go:172] (0xc000a52000) (1) Data frame sent\nI0501 16:44:45.704531    3529 log.go:172] (0xc0000ea370) (0xc000a52000) Stream removed, broadcasting: 1\nI0501 16:44:45.704544    3529 log.go:172] (0xc0000ea370) Go away received\nI0501 16:44:45.704972    3529 log.go:172] (0xc0000ea370) (0xc000a52000) Stream removed, broadcasting: 1\nI0501 16:44:45.704991    3529 log.go:172] (0xc0000ea370) (0xc00056cb40) Stream removed, broadcasting: 3\nI0501 16:44:45.704997    3529 log.go:172] (0xc0000ea370) (0xc00081b2c0) Stream removed, broadcasting: 5\n"
May  1 16:44:45.709: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 16:44:45.709: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 16:44:45.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 16:44:46.046: INFO: stderr: "I0501 16:44:45.925592    3550 log.go:172] (0xc00003b4a0) (0xc000c306e0) Create stream\nI0501 16:44:45.925656    3550 log.go:172] (0xc00003b4a0) (0xc000c306e0) Stream added, broadcasting: 1\nI0501 16:44:45.928504    3550 log.go:172] (0xc00003b4a0) Reply frame received for 1\nI0501 16:44:45.928549    3550 log.go:172] (0xc00003b4a0) (0xc000c30780) Create stream\nI0501 16:44:45.928567    3550 log.go:172] (0xc00003b4a0) (0xc000c30780) Stream added, broadcasting: 3\nI0501 16:44:45.929615    3550 log.go:172] (0xc00003b4a0) Reply frame received for 3\nI0501 16:44:45.929637    3550 log.go:172] (0xc00003b4a0) (0xc000bcc1e0) Create stream\nI0501 16:44:45.929645    3550 log.go:172] (0xc00003b4a0) (0xc000bcc1e0) Stream added, broadcasting: 5\nI0501 16:44:45.930593    3550 log.go:172] (0xc00003b4a0) Reply frame received for 5\nI0501 16:44:45.997497    3550 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0501 16:44:45.997647    3550 log.go:172] (0xc000bcc1e0) (5) Data frame handling\nI0501 16:44:45.997692    3550 log.go:172] (0xc000bcc1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 16:44:46.038305    3550 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0501 16:44:46.038400    3550 log.go:172] (0xc000c30780) (3) Data frame handling\nI0501 16:44:46.038424    3550 log.go:172] (0xc000c30780) (3) Data frame sent\nI0501 16:44:46.038436    3550 log.go:172] (0xc00003b4a0) Data frame received for 3\nI0501 16:44:46.038454    3550 log.go:172] (0xc000c30780) (3) Data frame handling\nI0501 16:44:46.038469    3550 log.go:172] (0xc00003b4a0) Data frame received for 5\nI0501 16:44:46.038477    3550 log.go:172] (0xc000bcc1e0) (5) Data frame handling\nI0501 16:44:46.040075    3550 log.go:172] (0xc00003b4a0) Data frame received for 1\nI0501 16:44:46.040093    3550 log.go:172] (0xc000c306e0) (1) Data frame handling\nI0501 16:44:46.040110    3550 log.go:172] (0xc000c306e0) (1) Data frame sent\nI0501 16:44:46.040119  
  3550 log.go:172] (0xc00003b4a0) (0xc000c306e0) Stream removed, broadcasting: 1\nI0501 16:44:46.040170    3550 log.go:172] (0xc00003b4a0) Go away received\nI0501 16:44:46.040918    3550 log.go:172] (0xc00003b4a0) (0xc000c306e0) Stream removed, broadcasting: 1\nI0501 16:44:46.040941    3550 log.go:172] (0xc00003b4a0) (0xc000c30780) Stream removed, broadcasting: 3\nI0501 16:44:46.040964    3550 log.go:172] (0xc00003b4a0) (0xc000bcc1e0) Stream removed, broadcasting: 5\n"
May  1 16:44:46.046: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 16:44:46.046: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 16:44:46.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  1 16:44:46.284: INFO: stderr: "I0501 16:44:46.173290    3570 log.go:172] (0xc0009168f0) (0xc000918140) Create stream\nI0501 16:44:46.173349    3570 log.go:172] (0xc0009168f0) (0xc000918140) Stream added, broadcasting: 1\nI0501 16:44:46.180861    3570 log.go:172] (0xc0009168f0) Reply frame received for 1\nI0501 16:44:46.180907    3570 log.go:172] (0xc0009168f0) (0xc000649360) Create stream\nI0501 16:44:46.180919    3570 log.go:172] (0xc0009168f0) (0xc000649360) Stream added, broadcasting: 3\nI0501 16:44:46.182453    3570 log.go:172] (0xc0009168f0) Reply frame received for 3\nI0501 16:44:46.182502    3570 log.go:172] (0xc0009168f0) (0xc0009da000) Create stream\nI0501 16:44:46.182518    3570 log.go:172] (0xc0009168f0) (0xc0009da000) Stream added, broadcasting: 5\nI0501 16:44:46.183365    3570 log.go:172] (0xc0009168f0) Reply frame received for 5\nI0501 16:44:46.245589    3570 log.go:172] (0xc0009168f0) Data frame received for 5\nI0501 16:44:46.245631    3570 log.go:172] (0xc0009da000) (5) Data frame handling\nI0501 16:44:46.245664    3570 log.go:172] (0xc0009da000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0501 16:44:46.278076    3570 log.go:172] (0xc0009168f0) Data frame received for 3\nI0501 16:44:46.278103    3570 log.go:172] (0xc000649360) (3) Data frame handling\nI0501 16:44:46.278123    3570 log.go:172] (0xc000649360) (3) Data frame sent\nI0501 16:44:46.278266    3570 log.go:172] (0xc0009168f0) Data frame received for 5\nI0501 16:44:46.278287    3570 log.go:172] (0xc0009da000) (5) Data frame handling\nI0501 16:44:46.278587    3570 log.go:172] (0xc0009168f0) Data frame received for 3\nI0501 16:44:46.278610    3570 log.go:172] (0xc000649360) (3) Data frame handling\nI0501 16:44:46.280391    3570 log.go:172] (0xc0009168f0) Data frame received for 1\nI0501 16:44:46.280407    3570 log.go:172] (0xc000918140) (1) Data frame handling\nI0501 16:44:46.280416    3570 log.go:172] (0xc000918140) (1) Data frame sent\nI0501 16:44:46.280425  
  3570 log.go:172] (0xc0009168f0) (0xc000918140) Stream removed, broadcasting: 1\nI0501 16:44:46.280658    3570 log.go:172] (0xc0009168f0) Go away received\nI0501 16:44:46.280687    3570 log.go:172] (0xc0009168f0) (0xc000918140) Stream removed, broadcasting: 1\nI0501 16:44:46.280703    3570 log.go:172] (0xc0009168f0) (0xc000649360) Stream removed, broadcasting: 3\nI0501 16:44:46.280713    3570 log.go:172] (0xc0009168f0) (0xc0009da000) Stream removed, broadcasting: 5\n"
May  1 16:44:46.284: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  1 16:44:46.284: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  1 16:44:46.284: INFO: Waiting for statefulset status.replicas updated to 0
May  1 16:44:46.287: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May  1 16:44:56.294: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  1 16:44:56.294: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May  1 16:44:56.294: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May  1 16:44:56.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999573s
May  1 16:44:57.344: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96258893s
May  1 16:44:58.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.957558249s
May  1 16:44:59.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.952558102s
May  1 16:45:00.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.770832928s
May  1 16:45:01.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.765764334s
May  1 16:45:02.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.760356753s
May  1 16:45:03.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.7553766s
May  1 16:45:04.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.750106419s
May  1 16:45:05.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 488.747375ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3960
May  1 16:45:06.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 16:45:07.093: INFO: stderr: "I0501 16:45:07.000126    3593 log.go:172] (0xc00003a6e0) (0xc000b0c1e0) Create stream\nI0501 16:45:07.000179    3593 log.go:172] (0xc00003a6e0) (0xc000b0c1e0) Stream added, broadcasting: 1\nI0501 16:45:07.002743    3593 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0501 16:45:07.002793    3593 log.go:172] (0xc00003a6e0) (0xc000af01e0) Create stream\nI0501 16:45:07.002808    3593 log.go:172] (0xc00003a6e0) (0xc000af01e0) Stream added, broadcasting: 3\nI0501 16:45:07.004054    3593 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0501 16:45:07.004121    3593 log.go:172] (0xc00003a6e0) (0xc000af0280) Create stream\nI0501 16:45:07.004144    3593 log.go:172] (0xc00003a6e0) (0xc000af0280) Stream added, broadcasting: 5\nI0501 16:45:07.005381    3593 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0501 16:45:07.084068    3593 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0501 16:45:07.084125    3593 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0501 16:45:07.084168    3593 log.go:172] (0xc000af0280) (5) Data frame handling\nI0501 16:45:07.084186    3593 log.go:172] (0xc000af0280) (5) Data frame sent\nI0501 16:45:07.084199    3593 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0501 16:45:07.084248    3593 log.go:172] (0xc000af0280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 16:45:07.084294    3593 log.go:172] (0xc000af01e0) (3) Data frame handling\nI0501 16:45:07.084327    3593 log.go:172] (0xc000af01e0) (3) Data frame sent\nI0501 16:45:07.084350    3593 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0501 16:45:07.084363    3593 log.go:172] (0xc000af01e0) (3) Data frame handling\nI0501 16:45:07.086651    3593 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0501 16:45:07.086709    3593 log.go:172] (0xc000b0c1e0) (1) Data frame handling\nI0501 16:45:07.086733    3593 log.go:172] (0xc000b0c1e0) (1) Data frame sent\nI0501 16:45:07.086755  
  3593 log.go:172] (0xc00003a6e0) (0xc000b0c1e0) Stream removed, broadcasting: 1\nI0501 16:45:07.086786    3593 log.go:172] (0xc00003a6e0) Go away received\nI0501 16:45:07.087786    3593 log.go:172] (0xc00003a6e0) (0xc000b0c1e0) Stream removed, broadcasting: 1\nI0501 16:45:07.087814    3593 log.go:172] (0xc00003a6e0) (0xc000af01e0) Stream removed, broadcasting: 3\nI0501 16:45:07.087825    3593 log.go:172] (0xc00003a6e0) (0xc000af0280) Stream removed, broadcasting: 5\n"
May  1 16:45:07.093: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 16:45:07.093: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 16:45:07.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 16:45:07.288: INFO: stderr: "I0501 16:45:07.220468    3613 log.go:172] (0xc00095e9a0) (0xc0006f15e0) Create stream\nI0501 16:45:07.220556    3613 log.go:172] (0xc00095e9a0) (0xc0006f15e0) Stream added, broadcasting: 1\nI0501 16:45:07.223461    3613 log.go:172] (0xc00095e9a0) Reply frame received for 1\nI0501 16:45:07.223504    3613 log.go:172] (0xc00095e9a0) (0xc0008e8000) Create stream\nI0501 16:45:07.223519    3613 log.go:172] (0xc00095e9a0) (0xc0008e8000) Stream added, broadcasting: 3\nI0501 16:45:07.224354    3613 log.go:172] (0xc00095e9a0) Reply frame received for 3\nI0501 16:45:07.224395    3613 log.go:172] (0xc00095e9a0) (0xc000518aa0) Create stream\nI0501 16:45:07.224407    3613 log.go:172] (0xc00095e9a0) (0xc000518aa0) Stream added, broadcasting: 5\nI0501 16:45:07.225293    3613 log.go:172] (0xc00095e9a0) Reply frame received for 5\nI0501 16:45:07.276989    3613 log.go:172] (0xc00095e9a0) Data frame received for 5\nI0501 16:45:07.277019    3613 log.go:172] (0xc000518aa0) (5) Data frame handling\nI0501 16:45:07.277040    3613 log.go:172] (0xc000518aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 16:45:07.280726    3613 log.go:172] (0xc00095e9a0) Data frame received for 3\nI0501 16:45:07.280766    3613 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0501 16:45:07.280786    3613 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0501 16:45:07.280948    3613 log.go:172] (0xc00095e9a0) Data frame received for 3\nI0501 16:45:07.280967    3613 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0501 16:45:07.281034    3613 log.go:172] (0xc00095e9a0) Data frame received for 5\nI0501 16:45:07.281047    3613 log.go:172] (0xc000518aa0) (5) Data frame handling\nI0501 16:45:07.282674    3613 log.go:172] (0xc00095e9a0) Data frame received for 1\nI0501 16:45:07.282688    3613 log.go:172] (0xc0006f15e0) (1) Data frame handling\nI0501 16:45:07.282695    3613 log.go:172] (0xc0006f15e0) (1) Data frame sent\nI0501 16:45:07.282703  
  3613 log.go:172] (0xc00095e9a0) (0xc0006f15e0) Stream removed, broadcasting: 1\nI0501 16:45:07.282714    3613 log.go:172] (0xc00095e9a0) Go away received\nI0501 16:45:07.282996    3613 log.go:172] (0xc00095e9a0) (0xc0006f15e0) Stream removed, broadcasting: 1\nI0501 16:45:07.283010    3613 log.go:172] (0xc00095e9a0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0501 16:45:07.283016    3613 log.go:172] (0xc00095e9a0) (0xc000518aa0) Stream removed, broadcasting: 5\n"
May  1 16:45:07.288: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 16:45:07.288: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 16:45:07.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3960 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  1 16:45:07.520: INFO: stderr: "I0501 16:45:07.438443    3635 log.go:172] (0xc000ada000) (0xc000bf20a0) Create stream\nI0501 16:45:07.438516    3635 log.go:172] (0xc000ada000) (0xc000bf20a0) Stream added, broadcasting: 1\nI0501 16:45:07.441394    3635 log.go:172] (0xc000ada000) Reply frame received for 1\nI0501 16:45:07.441430    3635 log.go:172] (0xc000ada000) (0xc000bb6500) Create stream\nI0501 16:45:07.441448    3635 log.go:172] (0xc000ada000) (0xc000bb6500) Stream added, broadcasting: 3\nI0501 16:45:07.442443    3635 log.go:172] (0xc000ada000) Reply frame received for 3\nI0501 16:45:07.442471    3635 log.go:172] (0xc000ada000) (0xc000bf2140) Create stream\nI0501 16:45:07.442481    3635 log.go:172] (0xc000ada000) (0xc000bf2140) Stream added, broadcasting: 5\nI0501 16:45:07.443547    3635 log.go:172] (0xc000ada000) Reply frame received for 5\nI0501 16:45:07.513065    3635 log.go:172] (0xc000ada000) Data frame received for 3\nI0501 16:45:07.513087    3635 log.go:172] (0xc000bb6500) (3) Data frame handling\nI0501 16:45:07.513100    3635 log.go:172] (0xc000bb6500) (3) Data frame sent\nI0501 16:45:07.513367    3635 log.go:172] (0xc000ada000) Data frame received for 3\nI0501 16:45:07.513404    3635 log.go:172] (0xc000bb6500) (3) Data frame handling\nI0501 16:45:07.513441    3635 log.go:172] (0xc000ada000) Data frame received for 5\nI0501 16:45:07.513463    3635 log.go:172] (0xc000bf2140) (5) Data frame handling\nI0501 16:45:07.513487    3635 log.go:172] (0xc000bf2140) (5) Data frame sent\nI0501 16:45:07.513503    3635 log.go:172] (0xc000ada000) Data frame received for 5\nI0501 16:45:07.513514    3635 log.go:172] (0xc000bf2140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0501 16:45:07.515086    3635 log.go:172] (0xc000ada000) Data frame received for 1\nI0501 16:45:07.515106    3635 log.go:172] (0xc000bf20a0) (1) Data frame handling\nI0501 16:45:07.515120    3635 log.go:172] (0xc000bf20a0) (1) Data frame sent\nI0501 16:45:07.515135  
  3635 log.go:172] (0xc000ada000) (0xc000bf20a0) Stream removed, broadcasting: 1\nI0501 16:45:07.515193    3635 log.go:172] (0xc000ada000) Go away received\nI0501 16:45:07.515576    3635 log.go:172] (0xc000ada000) (0xc000bf20a0) Stream removed, broadcasting: 1\nI0501 16:45:07.515598    3635 log.go:172] (0xc000ada000) (0xc000bb6500) Stream removed, broadcasting: 3\nI0501 16:45:07.515610    3635 log.go:172] (0xc000ada000) (0xc000bf2140) Stream removed, broadcasting: 5\n"
May  1 16:45:07.520: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  1 16:45:07.520: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  1 16:45:07.520: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  1 16:45:27.537: INFO: Deleting all statefulset in ns statefulset-3960
May  1 16:45:27.540: INFO: Scaling statefulset ss to 0
May  1 16:45:27.573: INFO: Waiting for statefulset status.replicas updated to 0
May  1 16:45:27.575: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:45:27.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3960" for this suite.

• [SLOW TEST:93.867 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":259,"skipped":4459,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:45:27.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:45:28.241: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d" in namespace "downward-api-8391" to be "Succeeded or Failed"
May  1 16:45:28.381: INFO: Pod "downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d": Phase="Pending", Reason="", readiness=false. Elapsed: 140.04142ms
May  1 16:45:30.386: INFO: Pod "downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14523136s
May  1 16:45:32.400: INFO: Pod "downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159819722s
STEP: Saw pod success
May  1 16:45:32.400: INFO: Pod "downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d" satisfied condition "Succeeded or Failed"
May  1 16:45:32.406: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d container client-container: 
STEP: delete the pod
May  1 16:45:32.491: INFO: Waiting for pod downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d to disappear
May  1 16:45:32.502: INFO: Pod downwardapi-volume-a01ab962-4936-407d-b777-5750a9e6060d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:45:32.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8391" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4470,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:45:32.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May  1 16:45:51.011: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.011: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.102492       7 log.go:172] (0xc002b94dc0) (0xc001a280a0) Create stream
I0501 16:45:51.102543       7 log.go:172] (0xc002b94dc0) (0xc001a280a0) Stream added, broadcasting: 1
I0501 16:45:51.105045       7 log.go:172] (0xc002b94dc0) Reply frame received for 1
I0501 16:45:51.105084       7 log.go:172] (0xc002b94dc0) (0xc001a28140) Create stream
I0501 16:45:51.105096       7 log.go:172] (0xc002b94dc0) (0xc001a28140) Stream added, broadcasting: 3
I0501 16:45:51.106469       7 log.go:172] (0xc002b94dc0) Reply frame received for 3
I0501 16:45:51.106523       7 log.go:172] (0xc002b94dc0) (0xc001e26820) Create stream
I0501 16:45:51.106541       7 log.go:172] (0xc002b94dc0) (0xc001e26820) Stream added, broadcasting: 5
I0501 16:45:51.107591       7 log.go:172] (0xc002b94dc0) Reply frame received for 5
I0501 16:45:51.177339       7 log.go:172] (0xc002b94dc0) Data frame received for 3
I0501 16:45:51.177402       7 log.go:172] (0xc001a28140) (3) Data frame handling
I0501 16:45:51.177438       7 log.go:172] (0xc001a28140) (3) Data frame sent
I0501 16:45:51.177459       7 log.go:172] (0xc002b94dc0) Data frame received for 3
I0501 16:45:51.177475       7 log.go:172] (0xc001a28140) (3) Data frame handling
I0501 16:45:51.177499       7 log.go:172] (0xc002b94dc0) Data frame received for 5
I0501 16:45:51.177515       7 log.go:172] (0xc001e26820) (5) Data frame handling
I0501 16:45:51.178963       7 log.go:172] (0xc002b94dc0) Data frame received for 1
I0501 16:45:51.178991       7 log.go:172] (0xc001a280a0) (1) Data frame handling
I0501 16:45:51.179007       7 log.go:172] (0xc001a280a0) (1) Data frame sent
I0501 16:45:51.179183       7 log.go:172] (0xc002b94dc0) (0xc001a280a0) Stream removed, broadcasting: 1
I0501 16:45:51.179221       7 log.go:172] (0xc002b94dc0) Go away received
I0501 16:45:51.179369       7 log.go:172] (0xc002b94dc0) (0xc001a280a0) Stream removed, broadcasting: 1
I0501 16:45:51.179404       7 log.go:172] (0xc002b94dc0) (0xc001a28140) Stream removed, broadcasting: 3
I0501 16:45:51.179484       7 log.go:172] (0xc002b94dc0) (0xc001e26820) Stream removed, broadcasting: 5
May  1 16:45:51.179: INFO: Exec stderr: ""
May  1 16:45:51.179: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.179: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.211917       7 log.go:172] (0xc0052c29a0) (0xc001875cc0) Create stream
I0501 16:45:51.211958       7 log.go:172] (0xc0052c29a0) (0xc001875cc0) Stream added, broadcasting: 1
I0501 16:45:51.215484       7 log.go:172] (0xc0052c29a0) Reply frame received for 1
I0501 16:45:51.215539       7 log.go:172] (0xc0052c29a0) (0xc0016fadc0) Create stream
I0501 16:45:51.215555       7 log.go:172] (0xc0052c29a0) (0xc0016fadc0) Stream added, broadcasting: 3
I0501 16:45:51.217769       7 log.go:172] (0xc0052c29a0) Reply frame received for 3
I0501 16:45:51.217816       7 log.go:172] (0xc0052c29a0) (0xc0016fafa0) Create stream
I0501 16:45:51.217826       7 log.go:172] (0xc0052c29a0) (0xc0016fafa0) Stream added, broadcasting: 5
I0501 16:45:51.218808       7 log.go:172] (0xc0052c29a0) Reply frame received for 5
I0501 16:45:51.279191       7 log.go:172] (0xc0052c29a0) Data frame received for 5
I0501 16:45:51.279222       7 log.go:172] (0xc0016fafa0) (5) Data frame handling
I0501 16:45:51.279239       7 log.go:172] (0xc0052c29a0) Data frame received for 3
I0501 16:45:51.279246       7 log.go:172] (0xc0016fadc0) (3) Data frame handling
I0501 16:45:51.279257       7 log.go:172] (0xc0016fadc0) (3) Data frame sent
I0501 16:45:51.279264       7 log.go:172] (0xc0052c29a0) Data frame received for 3
I0501 16:45:51.279270       7 log.go:172] (0xc0016fadc0) (3) Data frame handling
I0501 16:45:51.280073       7 log.go:172] (0xc0052c29a0) Data frame received for 1
I0501 16:45:51.280100       7 log.go:172] (0xc001875cc0) (1) Data frame handling
I0501 16:45:51.280113       7 log.go:172] (0xc001875cc0) (1) Data frame sent
I0501 16:45:51.280125       7 log.go:172] (0xc0052c29a0) (0xc001875cc0) Stream removed, broadcasting: 1
I0501 16:45:51.280141       7 log.go:172] (0xc0052c29a0) Go away received
I0501 16:45:51.280245       7 log.go:172] (0xc0052c29a0) (0xc001875cc0) Stream removed, broadcasting: 1
I0501 16:45:51.280259       7 log.go:172] (0xc0052c29a0) (0xc0016fadc0) Stream removed, broadcasting: 3
I0501 16:45:51.280265       7 log.go:172] (0xc0052c29a0) (0xc0016fafa0) Stream removed, broadcasting: 5
May  1 16:45:51.280: INFO: Exec stderr: ""
May  1 16:45:51.280: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.280: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.411459       7 log.go:172] (0xc002b953f0) (0xc001a28320) Create stream
I0501 16:45:51.411514       7 log.go:172] (0xc002b953f0) (0xc001a28320) Stream added, broadcasting: 1
I0501 16:45:51.414507       7 log.go:172] (0xc002b953f0) Reply frame received for 1
I0501 16:45:51.414556       7 log.go:172] (0xc002b953f0) (0xc001e26960) Create stream
I0501 16:45:51.414590       7 log.go:172] (0xc002b953f0) (0xc001e26960) Stream added, broadcasting: 3
I0501 16:45:51.415580       7 log.go:172] (0xc002b953f0) Reply frame received for 3
I0501 16:45:51.415620       7 log.go:172] (0xc002b953f0) (0xc001e26a00) Create stream
I0501 16:45:51.415636       7 log.go:172] (0xc002b953f0) (0xc001e26a00) Stream added, broadcasting: 5
I0501 16:45:51.416602       7 log.go:172] (0xc002b953f0) Reply frame received for 5
I0501 16:45:51.493945       7 log.go:172] (0xc002b953f0) Data frame received for 3
I0501 16:45:51.494015       7 log.go:172] (0xc001e26960) (3) Data frame handling
I0501 16:45:51.494034       7 log.go:172] (0xc001e26960) (3) Data frame sent
I0501 16:45:51.494070       7 log.go:172] (0xc002b953f0) Data frame received for 3
I0501 16:45:51.494108       7 log.go:172] (0xc002b953f0) Data frame received for 5
I0501 16:45:51.494149       7 log.go:172] (0xc001e26a00) (5) Data frame handling
I0501 16:45:51.494193       7 log.go:172] (0xc001e26960) (3) Data frame handling
I0501 16:45:51.495884       7 log.go:172] (0xc002b953f0) Data frame received for 1
I0501 16:45:51.495908       7 log.go:172] (0xc001a28320) (1) Data frame handling
I0501 16:45:51.495931       7 log.go:172] (0xc001a28320) (1) Data frame sent
I0501 16:45:51.495961       7 log.go:172] (0xc002b953f0) (0xc001a28320) Stream removed, broadcasting: 1
I0501 16:45:51.496045       7 log.go:172] (0xc002b953f0) Go away received
I0501 16:45:51.496099       7 log.go:172] (0xc002b953f0) (0xc001a28320) Stream removed, broadcasting: 1
I0501 16:45:51.496156       7 log.go:172] (0xc002b953f0) (0xc001e26960) Stream removed, broadcasting: 3
I0501 16:45:51.496184       7 log.go:172] (0xc002b953f0) (0xc001e26a00) Stream removed, broadcasting: 5
May  1 16:45:51.496: INFO: Exec stderr: ""
May  1 16:45:51.496: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.496: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.531662       7 log.go:172] (0xc002b95970) (0xc001a285a0) Create stream
I0501 16:45:51.531694       7 log.go:172] (0xc002b95970) (0xc001a285a0) Stream added, broadcasting: 1
I0501 16:45:51.534456       7 log.go:172] (0xc002b95970) Reply frame received for 1
I0501 16:45:51.534500       7 log.go:172] (0xc002b95970) (0xc001a286e0) Create stream
I0501 16:45:51.534517       7 log.go:172] (0xc002b95970) (0xc001a286e0) Stream added, broadcasting: 3
I0501 16:45:51.535602       7 log.go:172] (0xc002b95970) Reply frame received for 3
I0501 16:45:51.535634       7 log.go:172] (0xc002b95970) (0xc001a28780) Create stream
I0501 16:45:51.535645       7 log.go:172] (0xc002b95970) (0xc001a28780) Stream added, broadcasting: 5
I0501 16:45:51.536495       7 log.go:172] (0xc002b95970) Reply frame received for 5
I0501 16:45:51.595259       7 log.go:172] (0xc002b95970) Data frame received for 3
I0501 16:45:51.595311       7 log.go:172] (0xc001a286e0) (3) Data frame handling
I0501 16:45:51.595328       7 log.go:172] (0xc001a286e0) (3) Data frame sent
I0501 16:45:51.595343       7 log.go:172] (0xc002b95970) Data frame received for 3
I0501 16:45:51.595368       7 log.go:172] (0xc001a286e0) (3) Data frame handling
I0501 16:45:51.595403       7 log.go:172] (0xc002b95970) Data frame received for 5
I0501 16:45:51.595434       7 log.go:172] (0xc001a28780) (5) Data frame handling
I0501 16:45:51.596974       7 log.go:172] (0xc002b95970) Data frame received for 1
I0501 16:45:51.596993       7 log.go:172] (0xc001a285a0) (1) Data frame handling
I0501 16:45:51.597020       7 log.go:172] (0xc001a285a0) (1) Data frame sent
I0501 16:45:51.597311       7 log.go:172] (0xc002b95970) (0xc001a285a0) Stream removed, broadcasting: 1
I0501 16:45:51.597366       7 log.go:172] (0xc002b95970) Go away received
I0501 16:45:51.597474       7 log.go:172] (0xc002b95970) (0xc001a285a0) Stream removed, broadcasting: 1
I0501 16:45:51.597492       7 log.go:172] (0xc002b95970) (0xc001a286e0) Stream removed, broadcasting: 3
I0501 16:45:51.597502       7 log.go:172] (0xc002b95970) (0xc001a28780) Stream removed, broadcasting: 5
May  1 16:45:51.597: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May  1 16:45:51.597: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.597: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.630563       7 log.go:172] (0xc002a1ee70) (0xc0016fb360) Create stream
I0501 16:45:51.630591       7 log.go:172] (0xc002a1ee70) (0xc0016fb360) Stream added, broadcasting: 1
I0501 16:45:51.633689       7 log.go:172] (0xc002a1ee70) Reply frame received for 1
I0501 16:45:51.633743       7 log.go:172] (0xc002a1ee70) (0xc0016fb4a0) Create stream
I0501 16:45:51.633766       7 log.go:172] (0xc002a1ee70) (0xc0016fb4a0) Stream added, broadcasting: 3
I0501 16:45:51.634865       7 log.go:172] (0xc002a1ee70) Reply frame received for 3
I0501 16:45:51.634908       7 log.go:172] (0xc002a1ee70) (0xc001875ea0) Create stream
I0501 16:45:51.634925       7 log.go:172] (0xc002a1ee70) (0xc001875ea0) Stream added, broadcasting: 5
I0501 16:45:51.636123       7 log.go:172] (0xc002a1ee70) Reply frame received for 5
I0501 16:45:51.698051       7 log.go:172] (0xc002a1ee70) Data frame received for 3
I0501 16:45:51.698098       7 log.go:172] (0xc0016fb4a0) (3) Data frame handling
I0501 16:45:51.698115       7 log.go:172] (0xc0016fb4a0) (3) Data frame sent
I0501 16:45:51.698129       7 log.go:172] (0xc002a1ee70) Data frame received for 3
I0501 16:45:51.698143       7 log.go:172] (0xc0016fb4a0) (3) Data frame handling
I0501 16:45:51.698223       7 log.go:172] (0xc002a1ee70) Data frame received for 5
I0501 16:45:51.698262       7 log.go:172] (0xc001875ea0) (5) Data frame handling
I0501 16:45:51.699760       7 log.go:172] (0xc002a1ee70) Data frame received for 1
I0501 16:45:51.699798       7 log.go:172] (0xc0016fb360) (1) Data frame handling
I0501 16:45:51.699838       7 log.go:172] (0xc0016fb360) (1) Data frame sent
I0501 16:45:51.699862       7 log.go:172] (0xc002a1ee70) (0xc0016fb360) Stream removed, broadcasting: 1
I0501 16:45:51.699981       7 log.go:172] (0xc002a1ee70) Go away received
I0501 16:45:51.700023       7 log.go:172] (0xc002a1ee70) (0xc0016fb360) Stream removed, broadcasting: 1
I0501 16:45:51.700050       7 log.go:172] (0xc002a1ee70) (0xc0016fb4a0) Stream removed, broadcasting: 3
I0501 16:45:51.700065       7 log.go:172] (0xc002a1ee70) (0xc001875ea0) Stream removed, broadcasting: 5
May  1 16:45:51.700: INFO: Exec stderr: ""
May  1 16:45:51.700: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.700: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.730540       7 log.go:172] (0xc0052c2d10) (0xc001552320) Create stream
I0501 16:45:51.730577       7 log.go:172] (0xc0052c2d10) (0xc001552320) Stream added, broadcasting: 1
I0501 16:45:51.733518       7 log.go:172] (0xc0052c2d10) Reply frame received for 1
I0501 16:45:51.733558       7 log.go:172] (0xc0052c2d10) (0xc001a28aa0) Create stream
I0501 16:45:51.733572       7 log.go:172] (0xc0052c2d10) (0xc001a28aa0) Stream added, broadcasting: 3
I0501 16:45:51.734594       7 log.go:172] (0xc0052c2d10) Reply frame received for 3
I0501 16:45:51.734634       7 log.go:172] (0xc0052c2d10) (0xc001e26c80) Create stream
I0501 16:45:51.734650       7 log.go:172] (0xc0052c2d10) (0xc001e26c80) Stream added, broadcasting: 5
I0501 16:45:51.735594       7 log.go:172] (0xc0052c2d10) Reply frame received for 5
I0501 16:45:51.806413       7 log.go:172] (0xc0052c2d10) Data frame received for 5
I0501 16:45:51.806436       7 log.go:172] (0xc001e26c80) (5) Data frame handling
I0501 16:45:51.806502       7 log.go:172] (0xc0052c2d10) Data frame received for 3
I0501 16:45:51.806541       7 log.go:172] (0xc001a28aa0) (3) Data frame handling
I0501 16:45:51.806576       7 log.go:172] (0xc001a28aa0) (3) Data frame sent
I0501 16:45:51.806597       7 log.go:172] (0xc0052c2d10) Data frame received for 3
I0501 16:45:51.806616       7 log.go:172] (0xc001a28aa0) (3) Data frame handling
I0501 16:45:51.809011       7 log.go:172] (0xc0052c2d10) Data frame received for 1
I0501 16:45:51.809033       7 log.go:172] (0xc001552320) (1) Data frame handling
I0501 16:45:51.809069       7 log.go:172] (0xc001552320) (1) Data frame sent
I0501 16:45:51.809102       7 log.go:172] (0xc0052c2d10) (0xc001552320) Stream removed, broadcasting: 1
I0501 16:45:51.809294       7 log.go:172] (0xc0052c2d10) Go away received
I0501 16:45:51.809399       7 log.go:172] (0xc0052c2d10) (0xc001552320) Stream removed, broadcasting: 1
I0501 16:45:51.809428       7 log.go:172] (0xc0052c2d10) (0xc001a28aa0) Stream removed, broadcasting: 3
I0501 16:45:51.809442       7 log.go:172] (0xc0052c2d10) (0xc001e26c80) Stream removed, broadcasting: 5
May  1 16:45:51.809: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May  1 16:45:51.809: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.809: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.844374       7 log.go:172] (0xc0052c34a0) (0xc001552820) Create stream
I0501 16:45:51.844419       7 log.go:172] (0xc0052c34a0) (0xc001552820) Stream added, broadcasting: 1
I0501 16:45:51.847304       7 log.go:172] (0xc0052c34a0) Reply frame received for 1
I0501 16:45:51.847363       7 log.go:172] (0xc0052c34a0) (0xc0015528c0) Create stream
I0501 16:45:51.847412       7 log.go:172] (0xc0052c34a0) (0xc0015528c0) Stream added, broadcasting: 3
I0501 16:45:51.848418       7 log.go:172] (0xc0052c34a0) Reply frame received for 3
I0501 16:45:51.848440       7 log.go:172] (0xc0052c34a0) (0xc001552aa0) Create stream
I0501 16:45:51.848455       7 log.go:172] (0xc0052c34a0) (0xc001552aa0) Stream added, broadcasting: 5
I0501 16:45:51.849891       7 log.go:172] (0xc0052c34a0) Reply frame received for 5
I0501 16:45:51.914619       7 log.go:172] (0xc0052c34a0) Data frame received for 5
I0501 16:45:51.914673       7 log.go:172] (0xc001552aa0) (5) Data frame handling
I0501 16:45:51.914705       7 log.go:172] (0xc0052c34a0) Data frame received for 3
I0501 16:45:51.914719       7 log.go:172] (0xc0015528c0) (3) Data frame handling
I0501 16:45:51.914735       7 log.go:172] (0xc0015528c0) (3) Data frame sent
I0501 16:45:51.914749       7 log.go:172] (0xc0052c34a0) Data frame received for 3
I0501 16:45:51.914766       7 log.go:172] (0xc0015528c0) (3) Data frame handling
I0501 16:45:51.916306       7 log.go:172] (0xc0052c34a0) Data frame received for 1
I0501 16:45:51.916342       7 log.go:172] (0xc001552820) (1) Data frame handling
I0501 16:45:51.916387       7 log.go:172] (0xc001552820) (1) Data frame sent
I0501 16:45:51.916421       7 log.go:172] (0xc0052c34a0) (0xc001552820) Stream removed, broadcasting: 1
I0501 16:45:51.916450       7 log.go:172] (0xc0052c34a0) Go away received
I0501 16:45:51.916546       7 log.go:172] (0xc0052c34a0) (0xc001552820) Stream removed, broadcasting: 1
I0501 16:45:51.916577       7 log.go:172] (0xc0052c34a0) (0xc0015528c0) Stream removed, broadcasting: 3
I0501 16:45:51.916608       7 log.go:172] (0xc0052c34a0) (0xc001552aa0) Stream removed, broadcasting: 5
May  1 16:45:51.916: INFO: Exec stderr: ""
May  1 16:45:51.916: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:51.916: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:51.947340       7 log.go:172] (0xc002a74370) (0xc000c46f00) Create stream
I0501 16:45:51.947392       7 log.go:172] (0xc002a74370) (0xc000c46f00) Stream added, broadcasting: 1
I0501 16:45:51.949758       7 log.go:172] (0xc002a74370) Reply frame received for 1
I0501 16:45:51.949786       7 log.go:172] (0xc002a74370) (0xc001e26e60) Create stream
I0501 16:45:51.949796       7 log.go:172] (0xc002a74370) (0xc001e26e60) Stream added, broadcasting: 3
I0501 16:45:51.950642       7 log.go:172] (0xc002a74370) Reply frame received for 3
I0501 16:45:51.950676       7 log.go:172] (0xc002a74370) (0xc001e26f00) Create stream
I0501 16:45:51.950689       7 log.go:172] (0xc002a74370) (0xc001e26f00) Stream added, broadcasting: 5
I0501 16:45:51.951381       7 log.go:172] (0xc002a74370) Reply frame received for 5
I0501 16:45:52.014710       7 log.go:172] (0xc002a74370) Data frame received for 5
I0501 16:45:52.014745       7 log.go:172] (0xc001e26f00) (5) Data frame handling
I0501 16:45:52.014766       7 log.go:172] (0xc002a74370) Data frame received for 3
I0501 16:45:52.014775       7 log.go:172] (0xc001e26e60) (3) Data frame handling
I0501 16:45:52.014790       7 log.go:172] (0xc001e26e60) (3) Data frame sent
I0501 16:45:52.014810       7 log.go:172] (0xc002a74370) Data frame received for 3
I0501 16:45:52.014826       7 log.go:172] (0xc001e26e60) (3) Data frame handling
I0501 16:45:52.016304       7 log.go:172] (0xc002a74370) Data frame received for 1
I0501 16:45:52.016324       7 log.go:172] (0xc000c46f00) (1) Data frame handling
I0501 16:45:52.016340       7 log.go:172] (0xc000c46f00) (1) Data frame sent
I0501 16:45:52.016353       7 log.go:172] (0xc002a74370) (0xc000c46f00) Stream removed, broadcasting: 1
I0501 16:45:52.016420       7 log.go:172] (0xc002a74370) (0xc000c46f00) Stream removed, broadcasting: 1
I0501 16:45:52.016444       7 log.go:172] (0xc002a74370) Go away received
I0501 16:45:52.016486       7 log.go:172] (0xc002a74370) (0xc001e26e60) Stream removed, broadcasting: 3
I0501 16:45:52.016520       7 log.go:172] (0xc002a74370) (0xc001e26f00) Stream removed, broadcasting: 5
May  1 16:45:52.016: INFO: Exec stderr: ""
May  1 16:45:52.016: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:52.016: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:52.040126       7 log.go:172] (0xc004f12790) (0xc001e272c0) Create stream
I0501 16:45:52.040151       7 log.go:172] (0xc004f12790) (0xc001e272c0) Stream added, broadcasting: 1
I0501 16:45:52.042205       7 log.go:172] (0xc004f12790) Reply frame received for 1
I0501 16:45:52.042242       7 log.go:172] (0xc004f12790) (0xc001e27400) Create stream
I0501 16:45:52.042253       7 log.go:172] (0xc004f12790) (0xc001e27400) Stream added, broadcasting: 3
I0501 16:45:52.043090       7 log.go:172] (0xc004f12790) Reply frame received for 3
I0501 16:45:52.043123       7 log.go:172] (0xc004f12790) (0xc001a28be0) Create stream
I0501 16:45:52.043134       7 log.go:172] (0xc004f12790) (0xc001a28be0) Stream added, broadcasting: 5
I0501 16:45:52.043980       7 log.go:172] (0xc004f12790) Reply frame received for 5
I0501 16:45:52.097380       7 log.go:172] (0xc004f12790) Data frame received for 3
I0501 16:45:52.097414       7 log.go:172] (0xc001e27400) (3) Data frame handling
I0501 16:45:52.097436       7 log.go:172] (0xc001e27400) (3) Data frame sent
I0501 16:45:52.097445       7 log.go:172] (0xc004f12790) Data frame received for 3
I0501 16:45:52.097455       7 log.go:172] (0xc001e27400) (3) Data frame handling
I0501 16:45:52.097692       7 log.go:172] (0xc004f12790) Data frame received for 5
I0501 16:45:52.097709       7 log.go:172] (0xc001a28be0) (5) Data frame handling
I0501 16:45:52.099897       7 log.go:172] (0xc004f12790) Data frame received for 1
I0501 16:45:52.099919       7 log.go:172] (0xc001e272c0) (1) Data frame handling
I0501 16:45:52.099935       7 log.go:172] (0xc001e272c0) (1) Data frame sent
I0501 16:45:52.099947       7 log.go:172] (0xc004f12790) (0xc001e272c0) Stream removed, broadcasting: 1
I0501 16:45:52.100035       7 log.go:172] (0xc004f12790) (0xc001e272c0) Stream removed, broadcasting: 1
I0501 16:45:52.100045       7 log.go:172] (0xc004f12790) (0xc001e27400) Stream removed, broadcasting: 3
I0501 16:45:52.100172       7 log.go:172] (0xc004f12790) (0xc001a28be0) Stream removed, broadcasting: 5
May  1 16:45:52.100: INFO: Exec stderr: ""
I0501 16:45:52.100198       7 log.go:172] (0xc004f12790) Go away received
May  1 16:45:52.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8735 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:45:52.100: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:45:52.131719       7 log.go:172] (0xc002a1f3f0) (0xc0016fb680) Create stream
I0501 16:45:52.131743       7 log.go:172] (0xc002a1f3f0) (0xc0016fb680) Stream added, broadcasting: 1
I0501 16:45:52.136448       7 log.go:172] (0xc002a1f3f0) Reply frame received for 1
I0501 16:45:52.136503       7 log.go:172] (0xc002a1f3f0) (0xc001552c80) Create stream
I0501 16:45:52.136529       7 log.go:172] (0xc002a1f3f0) (0xc001552c80) Stream added, broadcasting: 3
I0501 16:45:52.139186       7 log.go:172] (0xc002a1f3f0) Reply frame received for 3
I0501 16:45:52.139211       7 log.go:172] (0xc002a1f3f0) (0xc001e274a0) Create stream
I0501 16:45:52.139220       7 log.go:172] (0xc002a1f3f0) (0xc001e274a0) Stream added, broadcasting: 5
I0501 16:45:52.140059       7 log.go:172] (0xc002a1f3f0) Reply frame received for 5
I0501 16:45:52.199135       7 log.go:172] (0xc002a1f3f0) Data frame received for 5
I0501 16:45:52.199167       7 log.go:172] (0xc001e274a0) (5) Data frame handling
I0501 16:45:52.199186       7 log.go:172] (0xc002a1f3f0) Data frame received for 3
I0501 16:45:52.199193       7 log.go:172] (0xc001552c80) (3) Data frame handling
I0501 16:45:52.199213       7 log.go:172] (0xc001552c80) (3) Data frame sent
I0501 16:45:52.199221       7 log.go:172] (0xc002a1f3f0) Data frame received for 3
I0501 16:45:52.199226       7 log.go:172] (0xc001552c80) (3) Data frame handling
I0501 16:45:52.200887       7 log.go:172] (0xc002a1f3f0) Data frame received for 1
I0501 16:45:52.200916       7 log.go:172] (0xc0016fb680) (1) Data frame handling
I0501 16:45:52.200927       7 log.go:172] (0xc0016fb680) (1) Data frame sent
I0501 16:45:52.200941       7 log.go:172] (0xc002a1f3f0) (0xc0016fb680) Stream removed, broadcasting: 1
I0501 16:45:52.201040       7 log.go:172] (0xc002a1f3f0) (0xc0016fb680) Stream removed, broadcasting: 1
I0501 16:45:52.201064       7 log.go:172] (0xc002a1f3f0) (0xc001552c80) Stream removed, broadcasting: 3
I0501 16:45:52.201365       7 log.go:172] (0xc002a1f3f0) (0xc001e274a0) Stream removed, broadcasting: 5
May  1 16:45:52.201: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:45:52.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0501 16:45:52.201826       7 log.go:172] (0xc002a1f3f0) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8735" for this suite.

• [SLOW TEST:19.696 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:45:52.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-24ef6b02-9500-4545-a537-8bebf92833f5
STEP: Creating a pod to test consume secrets
May  1 16:45:52.337: INFO: Waiting up to 5m0s for pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9" in namespace "secrets-6372" to be "Succeeded or Failed"
May  1 16:45:52.495: INFO: Pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9": Phase="Pending", Reason="", readiness=false. Elapsed: 157.785411ms
May  1 16:45:54.499: INFO: Pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161904799s
May  1 16:45:56.639: INFO: Pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301938263s
May  1 16:45:58.644: INFO: Pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306385299s
STEP: Saw pod success
May  1 16:45:58.644: INFO: Pod "pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9" satisfied condition "Succeeded or Failed"
May  1 16:45:58.647: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9 container secret-volume-test: 
STEP: delete the pod
May  1 16:45:58.683: INFO: Waiting for pod pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9 to disappear
May  1 16:45:58.701: INFO: Pod pod-secrets-8093034d-e44c-4da3-8b42-a11aabc062a9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:45:58.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6372" for this suite.

• [SLOW TEST:6.498 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4509,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:45:58.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  1 16:45:58.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d" in namespace "projected-36" to be "Succeeded or Failed"
May  1 16:45:58.820: INFO: Pod "downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.622503ms
May  1 16:46:00.824: INFO: Pod "downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025639669s
May  1 16:46:02.828: INFO: Pod "downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030108845s
STEP: Saw pod success
May  1 16:46:02.828: INFO: Pod "downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d" satisfied condition "Succeeded or Failed"
May  1 16:46:02.831: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d container client-container: 
STEP: delete the pod
May  1 16:46:02.935: INFO: Waiting for pod downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d to disappear
May  1 16:46:02.938: INFO: Pod downwardapi-volume-24a791db-7655-4036-a000-adbd9b68213d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:46:02.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-36" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4513,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:46:03.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May  1 16:46:03.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May  1 16:46:04.008: INFO: stderr: ""
May  1 16:46:04.008: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:46:04.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2560" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":264,"skipped":4515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:46:04.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-a9e10438-e9c1-4fc2-8c5a-a4ecf33ce43e
STEP: Creating a pod to test consume configMaps
May  1 16:46:04.188: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5" in namespace "projected-2907" to be "Succeeded or Failed"
May  1 16:46:04.221: INFO: Pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.344671ms
May  1 16:46:06.229: INFO: Pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041729257s
May  1 16:46:08.233: INFO: Pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5": Phase="Running", Reason="", readiness=true. Elapsed: 4.045814352s
May  1 16:46:10.240: INFO: Pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051940036s
STEP: Saw pod success
May  1 16:46:10.240: INFO: Pod "pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5" satisfied condition "Succeeded or Failed"
May  1 16:46:10.243: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5 container projected-configmap-volume-test: 
STEP: delete the pod
May  1 16:46:10.676: INFO: Waiting for pod pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5 to disappear
May  1 16:46:10.731: INFO: Pod pod-projected-configmaps-51c222b5-6eba-4d58-93e0-4a0aec44e9e5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:46:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2907" for this suite.

• [SLOW TEST:6.622 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4562,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:46:10.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
May  1 16:46:11.018: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May  1 16:46:11.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:11.398: INFO: stderr: ""
May  1 16:46:11.398: INFO: stdout: "service/agnhost-slave created\n"
May  1 16:46:11.399: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May  1 16:46:11.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:11.772: INFO: stderr: ""
May  1 16:46:11.772: INFO: stdout: "service/agnhost-master created\n"
May  1 16:46:11.772: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May  1 16:46:11.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:12.088: INFO: stderr: ""
May  1 16:46:12.088: INFO: stdout: "service/frontend created\n"
May  1 16:46:12.088: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May  1 16:46:12.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:12.377: INFO: stderr: ""
May  1 16:46:12.377: INFO: stdout: "deployment.apps/frontend created\n"
May  1 16:46:12.377: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  1 16:46:12.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:12.774: INFO: stderr: ""
May  1 16:46:12.774: INFO: stdout: "deployment.apps/agnhost-master created\n"
May  1 16:46:12.774: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May  1 16:46:12.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8100'
May  1 16:46:13.194: INFO: stderr: ""
May  1 16:46:13.194: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May  1 16:46:13.194: INFO: Waiting for all frontend pods to be Running.
May  1 16:46:28.245: INFO: Waiting for frontend to serve content.
May  1 16:46:28.256: INFO: Trying to add a new entry to the guestbook.
May  1 16:46:28.267: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May  1 16:46:28.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:28.686: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:28.686: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May  1 16:46:28.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:28.888: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:28.888: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  1 16:46:28.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:29.473: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:29.473: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  1 16:46:29.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:29.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:29.722: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May  1 16:46:29.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:30.263: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:30.263: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May  1 16:46:30.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8100'
May  1 16:46:30.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  1 16:46:30.770: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:46:30.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8100" for this suite.

• [SLOW TEST:20.109 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":266,"skipped":4563,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:46:30.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:46:31.738: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-6653
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6653 to expose endpoints map[]
May  1 16:46:34.779: INFO: successfully validated that service multi-endpoint-test in namespace services-6653 exposes endpoints map[] (371.598273ms elapsed)
STEP: Creating pod pod1 in namespace services-6653
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6653 to expose endpoints map[pod1:[100]]
May  1 16:46:40.510: INFO: successfully validated that service multi-endpoint-test in namespace services-6653 exposes endpoints map[pod1:[100]] (5.17632172s elapsed)
STEP: Creating pod pod2 in namespace services-6653
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6653 to expose endpoints map[pod1:[100] pod2:[101]]
May  1 16:46:44.697: INFO: successfully validated that service multi-endpoint-test in namespace services-6653 exposes endpoints map[pod1:[100] pod2:[101]] (4.181591537s elapsed)
STEP: Deleting pod pod1 in namespace services-6653
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6653 to expose endpoints map[pod2:[101]]
May  1 16:46:45.792: INFO: successfully validated that service multi-endpoint-test in namespace services-6653 exposes endpoints map[pod2:[101]] (1.090049176s elapsed)
STEP: Deleting pod pod2 in namespace services-6653
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6653 to expose endpoints map[]
May  1 16:46:46.832: INFO: successfully validated that service multi-endpoint-test in namespace services-6653 exposes endpoints map[] (1.034693474s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:46:46.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6653" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.999 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":268,"skipped":4580,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:46:46.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-4911
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  1 16:46:47.187: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  1 16:46:47.278: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:46:49.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:46:51.316: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  1 16:46:53.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:46:55.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:46:57.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:46:59.809: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:47:01.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  1 16:47:03.283: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  1 16:47:03.288: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  1 16:47:05.293: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  1 16:47:07.299: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  1 16:47:11.336: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.113:8080/dial?request=hostname&protocol=http&host=10.244.2.112&port=8080&tries=1'] Namespace:pod-network-test-4911 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:47:11.336: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:47:11.365305       7 log.go:172] (0xc002a74840) (0xc001a48820) Create stream
I0501 16:47:11.365359       7 log.go:172] (0xc002a74840) (0xc001a48820) Stream added, broadcasting: 1
I0501 16:47:11.367018       7 log.go:172] (0xc002a74840) Reply frame received for 1
I0501 16:47:11.367072       7 log.go:172] (0xc002a74840) (0xc001e260a0) Create stream
I0501 16:47:11.367087       7 log.go:172] (0xc002a74840) (0xc001e260a0) Stream added, broadcasting: 3
I0501 16:47:11.367857       7 log.go:172] (0xc002a74840) Reply frame received for 3
I0501 16:47:11.367888       7 log.go:172] (0xc002a74840) (0xc001a488c0) Create stream
I0501 16:47:11.367900       7 log.go:172] (0xc002a74840) (0xc001a488c0) Stream added, broadcasting: 5
I0501 16:47:11.368897       7 log.go:172] (0xc002a74840) Reply frame received for 5
I0501 16:47:11.458791       7 log.go:172] (0xc002a74840) Data frame received for 3
I0501 16:47:11.458839       7 log.go:172] (0xc001e260a0) (3) Data frame handling
I0501 16:47:11.458872       7 log.go:172] (0xc001e260a0) (3) Data frame sent
I0501 16:47:11.458966       7 log.go:172] (0xc002a74840) Data frame received for 5
I0501 16:47:11.459022       7 log.go:172] (0xc001a488c0) (5) Data frame handling
I0501 16:47:11.459184       7 log.go:172] (0xc002a74840) Data frame received for 3
I0501 16:47:11.459197       7 log.go:172] (0xc001e260a0) (3) Data frame handling
I0501 16:47:11.460927       7 log.go:172] (0xc002a74840) Data frame received for 1
I0501 16:47:11.460949       7 log.go:172] (0xc001a48820) (1) Data frame handling
I0501 16:47:11.461003       7 log.go:172] (0xc001a48820) (1) Data frame sent
I0501 16:47:11.461022       7 log.go:172] (0xc002a74840) (0xc001a48820) Stream removed, broadcasting: 1
I0501 16:47:11.461364       7 log.go:172] (0xc002a74840) (0xc001a48820) Stream removed, broadcasting: 1
I0501 16:47:11.461398       7 log.go:172] (0xc002a74840) (0xc001e260a0) Stream removed, broadcasting: 3
I0501 16:47:11.461639       7 log.go:172] (0xc002a74840) Go away received
I0501 16:47:11.461785       7 log.go:172] (0xc002a74840) (0xc001a488c0) Stream removed, broadcasting: 5
May  1 16:47:11.461: INFO: Waiting for responses: map[]
May  1 16:47:11.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.113:8080/dial?request=hostname&protocol=http&host=10.244.1.84&port=8080&tries=1'] Namespace:pod-network-test-4911 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  1 16:47:11.466: INFO: >>> kubeConfig: /root/.kube/config
I0501 16:47:11.499083       7 log.go:172] (0xc002a74f20) (0xc001a48d20) Create stream
I0501 16:47:11.499113       7 log.go:172] (0xc002a74f20) (0xc001a48d20) Stream added, broadcasting: 1
I0501 16:47:11.501039       7 log.go:172] (0xc002a74f20) Reply frame received for 1
I0501 16:47:11.501075       7 log.go:172] (0xc002a74f20) (0xc001e26460) Create stream
I0501 16:47:11.501088       7 log.go:172] (0xc002a74f20) (0xc001e26460) Stream added, broadcasting: 3
I0501 16:47:11.502359       7 log.go:172] (0xc002a74f20) Reply frame received for 3
I0501 16:47:11.502414       7 log.go:172] (0xc002a74f20) (0xc001874e60) Create stream
I0501 16:47:11.502430       7 log.go:172] (0xc002a74f20) (0xc001874e60) Stream added, broadcasting: 5
I0501 16:47:11.503673       7 log.go:172] (0xc002a74f20) Reply frame received for 5
I0501 16:47:11.573572       7 log.go:172] (0xc002a74f20) Data frame received for 3
I0501 16:47:11.573624       7 log.go:172] (0xc001e26460) (3) Data frame handling
I0501 16:47:11.573658       7 log.go:172] (0xc001e26460) (3) Data frame sent
I0501 16:47:11.574709       7 log.go:172] (0xc002a74f20) Data frame received for 5
I0501 16:47:11.574753       7 log.go:172] (0xc001874e60) (5) Data frame handling
I0501 16:47:11.574976       7 log.go:172] (0xc002a74f20) Data frame received for 3
I0501 16:47:11.575008       7 log.go:172] (0xc001e26460) (3) Data frame handling
I0501 16:47:11.576533       7 log.go:172] (0xc002a74f20) Data frame received for 1
I0501 16:47:11.576563       7 log.go:172] (0xc001a48d20) (1) Data frame handling
I0501 16:47:11.576578       7 log.go:172] (0xc001a48d20) (1) Data frame sent
I0501 16:47:11.576701       7 log.go:172] (0xc002a74f20) (0xc001a48d20) Stream removed, broadcasting: 1
I0501 16:47:11.576751       7 log.go:172] (0xc002a74f20) Go away received
I0501 16:47:11.576877       7 log.go:172] (0xc002a74f20) (0xc001a48d20) Stream removed, broadcasting: 1
I0501 16:47:11.576902       7 log.go:172] (0xc002a74f20) (0xc001e26460) Stream removed, broadcasting: 3
I0501 16:47:11.576915       7 log.go:172] (0xc002a74f20) (0xc001874e60) Stream removed, broadcasting: 5
May  1 16:47:11.576: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:47:11.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4911" for this suite.

• [SLOW TEST:24.675 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4588,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:47:11.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  1 16:47:11.742: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:11.789: INFO: Number of nodes with available pods: 0
May  1 16:47:11.789: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:12.826: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:12.830: INFO: Number of nodes with available pods: 0
May  1 16:47:12.830: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:13.795: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:13.799: INFO: Number of nodes with available pods: 0
May  1 16:47:13.799: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:14.886: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:14.889: INFO: Number of nodes with available pods: 0
May  1 16:47:14.889: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:15.795: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:15.800: INFO: Number of nodes with available pods: 1
May  1 16:47:15.800: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:16.797: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:16.802: INFO: Number of nodes with available pods: 2
May  1 16:47:16.802: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May  1 16:47:16.982: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:17.012: INFO: Number of nodes with available pods: 1
May  1 16:47:17.012: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:18.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:18.092: INFO: Number of nodes with available pods: 1
May  1 16:47:18.092: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:19.018: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:19.329: INFO: Number of nodes with available pods: 1
May  1 16:47:19.329: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:20.066: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:20.070: INFO: Number of nodes with available pods: 1
May  1 16:47:20.070: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:21.018: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:21.022: INFO: Number of nodes with available pods: 1
May  1 16:47:21.022: INFO: Node kali-worker is running more than one daemon pod
May  1 16:47:22.018: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  1 16:47:22.022: INFO: Number of nodes with available pods: 2
May  1 16:47:22.022: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2153, will wait for the garbage collector to delete the pods
May  1 16:47:22.090: INFO: Deleting DaemonSet.extensions daemon-set took: 8.920856ms
May  1 16:47:22.390: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.270331ms
May  1 16:47:34.094: INFO: Number of nodes with available pods: 0
May  1 16:47:34.094: INFO: Number of running nodes: 0, number of available pods: 0
May  1 16:47:34.097: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2153/daemonsets","resourceVersion":"680023"},"items":null}

May  1 16:47:34.100: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2153/pods","resourceVersion":"680023"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:47:34.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2153" for this suite.

• [SLOW TEST:22.736 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":270,"skipped":4606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:47:34.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  1 16:47:34.715: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:47:44.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9708" for this suite.

• [SLOW TEST:9.744 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":271,"skipped":4646,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:47:44.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-0952ddf7-7f1a-48a1-b2ee-9066ef813db5 in namespace container-probe-2173
May  1 16:47:49.736: INFO: Started pod liveness-0952ddf7-7f1a-48a1-b2ee-9066ef813db5 in namespace container-probe-2173
STEP: checking the pod's current state and verifying that restartCount is present
May  1 16:47:49.870: INFO: Initial restart count of pod liveness-0952ddf7-7f1a-48a1-b2ee-9066ef813db5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:51:51.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2173" for this suite.

• [SLOW TEST:248.216 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:51:52.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  1 16:51:59.424: INFO: Successfully updated pod "pod-update-564e2073-8537-4ff2-bd53-ab7902af4368"
STEP: verifying the updated pod is in kubernetes
May  1 16:51:59.470: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:51:59.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6535" for this suite.

• [SLOW TEST:7.193 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4685,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:51:59.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8998
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-8998
May  1 16:51:59.618: INFO: Found 0 stateful pods, waiting for 1
May  1 16:52:09.622: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  1 16:52:09.703: INFO: Deleting all statefulset in ns statefulset-8998
May  1 16:52:09.715: INFO: Scaling statefulset ss to 0
May  1 16:52:29.818: INFO: Waiting for statefulset status.replicas updated to 0
May  1 16:52:29.821: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:52:29.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8998" for this suite.

• [SLOW TEST:30.366 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":274,"skipped":4698,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  1 16:52:29.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  1 16:52:29.897: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  1 16:52:30.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5309" for this suite.
•
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":275,"skipped":4704,"failed":0}
SSSSSSSSSSSSS
May  1 16:52:31.032: INFO: Running AfterSuite actions on all nodes
May  1 16:52:31.032: INFO: Running AfterSuite actions on node 1
May  1 16:52:31.032: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 6034.079 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS