I0506 17:34:58.051875 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0506 17:34:58.052092 7 e2e.go:124] Starting e2e run "65484ced-1315-4076-be56-5961526c5d06" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588786496 - Will randomize all specs
Will run 275 of 4992 specs

May 6 17:34:58.111: INFO: >>> kubeConfig: /root/.kube/config
May 6 17:34:58.114: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 17:34:58.136: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 17:34:58.174: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 17:34:58.174: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 17:34:58.174: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 17:34:58.183: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 17:34:58.183: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 17:34:58.183: INFO: e2e test version: v1.18.2
May 6 17:34:58.184: INFO: kube-apiserver version: v1.18.2
May 6 17:34:58.185: INFO: >>> kubeConfig: /root/.kube/config
May 6 17:34:58.188: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:34:58.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
May 6 17:34:58.335: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:34:58.462: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Pending, waiting for it to be Running (with Ready = true)
May 6 17:35:00.495: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Pending, waiting for it to be Running (with Ready = true)
May 6 17:35:02.465: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:04.466: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:06.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:08.466: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:10.466: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:12.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:14.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:16.469: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:18.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:20.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:22.467: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = false)
May 6 17:35:24.465: INFO: The status of Pod test-webserver-7b2b8fa5-bdd4-49c5-b896-a4141489b29b is Running (Ready = true)
May 6 17:35:24.467: INFO: Container started at 2020-05-06 17:35:01 +0000 UTC, pod became ready at 2020-05-06 17:35:22 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:24.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7828" for this suite.

• [SLOW TEST:26.284 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:24.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:35:24.533: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0a652ed6-b565-471d-8cea-fac83ebd4e30" in namespace "security-context-test-4975" to be "Succeeded or Failed"
May 6 17:35:24.546: INFO: Pod "busybox-user-65534-0a652ed6-b565-471d-8cea-fac83ebd4e30": Phase="Pending", Reason="", readiness=false. Elapsed: 13.572548ms
May 6 17:35:26.550: INFO: Pod "busybox-user-65534-0a652ed6-b565-471d-8cea-fac83ebd4e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017378177s
May 6 17:35:28.554: INFO: Pod "busybox-user-65534-0a652ed6-b565-471d-8cea-fac83ebd4e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021570227s
May 6 17:35:28.554: INFO: Pod "busybox-user-65534-0a652ed6-b565-471d-8cea-fac83ebd4e30" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:28.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4975" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":25,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:28.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 17:35:29.035: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 17:35:31.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383329, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383329, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 17:35:34.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:35:34.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9712-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-154" for this suite.
STEP: Destroying namespace "webhook-154-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.204 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":3,"skipped":27,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:35.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:35:35.910: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f3ebe900-05d0-41a4-bbe6-1ce394dbeee5", Controller:(*bool)(0xc001e6c852), BlockOwnerDeletion:(*bool)(0xc001e6c853)}}
May 6 17:35:35.973: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a4136f84-a763-4e76-9218-bb32bf89602b", Controller:(*bool)(0xc0028fe532), BlockOwnerDeletion:(*bool)(0xc0028fe533)}}
May 6 17:35:36.012: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a83008d9-e6ec-4dae-ad64-e5a1a23fa6ff", Controller:(*bool)(0xc001e6ca1a), BlockOwnerDeletion:(*bool)(0xc001e6ca1b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:41.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6180" for this suite.

• [SLOW TEST:5.468 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":4,"skipped":33,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:41.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:48.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3050" for this suite.

• [SLOW TEST:7.510 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":5,"skipped":41,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:48.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May 6 17:35:53.742: INFO: Successfully updated pod "labelsupdate32eb0780-b843-48d7-9a4a-56a0548d4e4f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:35:57.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6097" for this suite.

• [SLOW TEST:9.181 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":55,"failed":0}
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:35:57.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
May 6 17:35:58.063: INFO: Waiting up to 5m0s for pod "client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd" in namespace "containers-8" to be "Succeeded or Failed"
May 6 17:35:58.072: INFO: Pod "client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.073986ms
May 6 17:36:00.266: INFO: Pod "client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202801216s
May 6 17:36:02.271: INFO: Pod "client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208336779s
STEP: Saw pod success
May 6 17:36:02.271: INFO: Pod "client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd" satisfied condition "Succeeded or Failed"
May 6 17:36:02.274: INFO: Trying to get logs from node kali-worker2 pod client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd container test-container:
STEP: delete the pod
May 6 17:36:02.315: INFO: Waiting for pod client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd to disappear
May 6 17:36:02.323: INFO: Pod client-containers-0f41b8c2-4ccd-45f5-8c38-a0b2e51deffd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:02.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:02.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 17:36:03.003: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 17:36:05.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383363, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383363, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383363, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383362, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 17:36:08.053: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4784" for this suite.
STEP: Destroying namespace "webhook-4784-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.820 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":8,"skipped":105,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:08.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 6 17:36:14.879: INFO: Successfully updated pod "adopt-release-q2bz5"
STEP: Checking that the Job readopts the Pod
May 6 17:36:14.879: INFO: Waiting up to 15m0s for pod "adopt-release-q2bz5" in namespace "job-8479" to be "adopted"
May 6 17:36:14.908: INFO: Pod "adopt-release-q2bz5": Phase="Running", Reason="", readiness=true. Elapsed: 29.105927ms
May 6 17:36:16.912: INFO: Pod "adopt-release-q2bz5": Phase="Running", Reason="", readiness=true. Elapsed: 2.032911617s
May 6 17:36:16.912: INFO: Pod "adopt-release-q2bz5" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 6 17:36:17.422: INFO: Successfully updated pod "adopt-release-q2bz5"
STEP: Checking that the Job releases the Pod
May 6 17:36:17.422: INFO: Waiting up to 15m0s for pod "adopt-release-q2bz5" in namespace "job-8479" to be "released"
May 6 17:36:17.442: INFO: Pod "adopt-release-q2bz5": Phase="Running", Reason="", readiness=true. Elapsed: 20.376946ms
May 6 17:36:19.446: INFO: Pod "adopt-release-q2bz5": Phase="Running", Reason="", readiness=true. Elapsed: 2.023548101s
May 6 17:36:19.446: INFO: Pod "adopt-release-q2bz5" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:19.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8479" for this suite.

• [SLOW TEST:11.301 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":9,"skipped":122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:19.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-12a40b7c-1108-4c02-b33c-3a07289fd702
STEP: Creating a pod to test consume secrets
May 6 17:36:20.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08" in namespace "projected-7147" to be "Succeeded or Failed"
May 6 17:36:20.384: INFO: Pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08": Phase="Pending", Reason="", readiness=false. Elapsed: 15.764521ms
May 6 17:36:22.581: INFO: Pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213350588s
May 6 17:36:24.595: INFO: Pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08": Phase="Running", Reason="", readiness=true. Elapsed: 4.226890515s
May 6 17:36:26.599: INFO: Pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231223163s
STEP: Saw pod success
May 6 17:36:26.599: INFO: Pod "pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08" satisfied condition "Succeeded or Failed"
May 6 17:36:26.601: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08 container projected-secret-volume-test:
STEP: delete the pod
May 6 17:36:26.711: INFO: Waiting for pod pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08 to disappear
May 6 17:36:26.719: INFO: Pod pod-projected-secrets-7ae43576-bc38-4db2-b71d-83a3f6b43a08 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:26.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7147" for this suite.

• [SLOW TEST:7.290 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:26.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:42.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3038" for this suite.

• [SLOW TEST:16.262 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":11,"skipped":189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:43.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May 6 17:36:43.473: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix049972978/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:36:43.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7433" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":12,"skipped":222,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:36:43.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May 6 17:36:43.752: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 6 17:36:43.762: INFO: Waiting for terminating namespaces to be deleted...
May 6 17:36:43.764: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 6 17:36:43.780: INFO: adopt-release-5ltwr from job-8479 started at 2020-05-06 17:36:17 +0000 UTC (1 container statuses recorded) May 6 17:36:43.780: INFO: Container c ready: true, restart count 0 May 6 17:36:43.780: INFO: adopt-release-q2bz5 from job-8479 started at 2020-05-06 17:36:08 +0000 UTC (1 container statuses recorded) May 6 17:36:43.780: INFO: Container c ready: true, restart count 0 May 6 17:36:43.780: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 6 17:36:43.780: INFO: Container kindnet-cni ready: true, restart count 1 May 6 17:36:43.780: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 6 17:36:43.780: INFO: Container kube-proxy ready: true, restart count 0 May 6 17:36:43.780: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 6 17:36:43.784: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 6 17:36:43.784: INFO: Container kube-proxy ready: true, restart count 0 May 6 17:36:43.784: INFO: adopt-release-r8gkp from job-8479 started at 2020-05-06 17:36:08 +0000 UTC (1 container statuses recorded) May 6 17:36:43.784: INFO: Container c ready: true, restart count 0 May 6 17:36:43.784: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 6 17:36:43.784: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c80ff998b2003], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:36:44.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7348" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":13,"skipped":226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:36:44.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 6 17:36:44.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd" in namespace 
"downward-api-2407" to be "Succeeded or Failed" May 6 17:36:45.045: INFO: Pod "downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd": Phase="Pending", Reason="", readiness=false. Elapsed: 85.122492ms May 6 17:36:47.049: INFO: Pod "downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089388055s May 6 17:36:49.104: INFO: Pod "downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144421905s STEP: Saw pod success May 6 17:36:49.104: INFO: Pod "downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd" satisfied condition "Succeeded or Failed" May 6 17:36:49.107: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd container client-container: STEP: delete the pod May 6 17:36:49.361: INFO: Waiting for pod downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd to disappear May 6 17:36:49.481: INFO: Pod downwardapi-volume-8090dcb5-e4ed-4950-846f-8907a104efbd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:36:49.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2407" for this suite. 
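The framework's wait above polls the pod's phase every couple of seconds until it reaches "Succeeded" or "Failed", logging the elapsed time at each poll. A minimal sketch of that loop, with the API call replaced by a hypothetical `get_phase` callable so it runs without a cluster:

```python
# Sketch of the e2e wait loop seen above: poll a pod's phase until it
# reaches a terminal state or a deadline expires. `get_phase` stands in
# for a real API request (hypothetical; the real framework calls the
# Kubernetes API and sleeps ~2s between polls).

def wait_for_pod_completion(get_phase, max_polls=150):
    """Return the terminal phase, or raise if the pod never completes."""
    for _ in range(max_polls):        # the log shows a 5m0s budget
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        # real code would time.sleep(2) here before re-polling
    raise TimeoutError("pod did not reach a terminal phase")

# Simulate the sequence recorded in the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_completion(lambda: next(phases))
print(result)  # → Succeeded
```

The condition string "Succeeded or Failed" in the log names exactly this predicate: either terminal phase satisfies the wait, and the test then checks which one it was.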
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":268,"failed":0} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:36:49.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:36:53.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3800" for this suite. 
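The Docker Containers test above verifies the documented merge rule between a container's `command`/`args` and the image's ENTRYPOINT/CMD: when both are blank, the image defaults run. A small sketch of that rule (example values are illustrative, not from the test):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Merge container command/args with image defaults, per the
    Kubernetes rules:
    - neither set: image ENTRYPOINT + image CMD (what this test checks)
    - command set, args unset: command only (image CMD is ignored)
    - args set, command unset: image ENTRYPOINT + args
    - both set: command + args (image defaults ignored)
    """
    if command is None and args is None:
        return image_entrypoint + image_cmd
    if command is not None and args is None:
        return command
    if command is None:
        return image_entrypoint + args
    return command + args

# Blank command and args: the image defaults are used as-is.
print(effective_invocation(["/agnhost"], ["serve-hostname"]))
# → ['/agnhost', 'serve-hostname']
```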
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:36:53.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-862.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-862.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-862.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-862.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-862.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-862.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 17:37:11.239: INFO: DNS probes using dns-862/dns-test-eadab45e-4233-4e93-a003-4b520337dc91 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:11.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-862" for this suite. 
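The awk one-liner in the probe commands above builds the pod's DNS A record from its IP address: dots become dashes, followed by the namespace and the `pod.cluster.local` suffix. The same transformation in Python (the IP shown is illustrative):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the in-cluster DNS A record name for a pod IP, mirroring
    the awk '{print $1"-"$2"-"$3"-"$4...}' step in the probe script."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-862"))
# → 10-244-1-5.dns-862.pod.cluster.local
```

The probers then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file per successful lookup, which is what the "looking for the results for each expected name" step collects.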
• [SLOW TEST:17.460 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":16,"skipped":297,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:11.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info May 6 17:37:11.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info' May 6 17:37:19.170: INFO: stderr: "" May 6 17:37:19.170: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl 
cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:19.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3035" for this suite. • [SLOW TEST:7.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:952 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":17,"skipped":298,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:19.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-18460a89-f088-4042-98d0-c12a9a66b68c STEP: Creating a pod to test consume configMaps May 6 17:37:19.790: INFO: Waiting up to 
5m0s for pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc" in namespace "projected-7024" to be "Succeeded or Failed" May 6 17:37:19.845: INFO: Pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 55.788933ms May 6 17:37:21.850: INFO: Pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060256523s May 6 17:37:23.874: INFO: Pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc": Phase="Running", Reason="", readiness=true. Elapsed: 4.084776264s May 6 17:37:26.063: INFO: Pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273442372s STEP: Saw pod success May 6 17:37:26.063: INFO: Pod "pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc" satisfied condition "Succeeded or Failed" May 6 17:37:26.105: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc container projected-configmap-volume-test: STEP: delete the pod May 6 17:37:26.219: INFO: Waiting for pod pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc to disappear May 6 17:37:26.236: INFO: Pod pod-projected-configmaps-4cda7b6f-7194-4ff3-a5d1-d1bf639c8dbc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:26.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7024" for this suite. 
• [SLOW TEST:7.066 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":312,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:26.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 6 17:37:26.394: INFO: Waiting up to 5m0s for pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46" in namespace "downward-api-6593" to be "Succeeded or Failed" May 6 17:37:26.404: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46": Phase="Pending", Reason="", readiness=false. Elapsed: 9.45164ms May 6 17:37:28.407: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013153746s May 6 17:37:30.506: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.112081373s May 6 17:37:32.519: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46": Phase="Running", Reason="", readiness=true. Elapsed: 6.124651281s May 6 17:37:34.602: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.207936202s STEP: Saw pod success May 6 17:37:34.602: INFO: Pod "downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46" satisfied condition "Succeeded or Failed" May 6 17:37:34.605: INFO: Trying to get logs from node kali-worker2 pod downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46 container dapi-container: STEP: delete the pod May 6 17:37:34.817: INFO: Waiting for pod downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46 to disappear May 6 17:37:34.841: INFO: Pod downward-api-b614d76c-6443-4a8c-83fd-e76cc4b38b46 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6593" for this suite. 
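The downward-API test above checks that pod metadata such as the UID is injected into the container as environment variables. Each env entry uses a `fieldRef` naming a path in the pod object, which the kubelet resolves at container start. A rough sketch of that substitution over an illustrative pod object (not the framework's actual code):

```python
# Illustrative pod object; the UID echoes the one in the log above.
pod = {"metadata": {"name": "downward-api-demo", "namespace": "default",
                    "uid": "b614d76c-6443-4a8c-83fd-e76cc4b38b46"}}

# Env entries as they would appear in a pod spec's downward-API section.
env_spec = [{"name": "POD_NAME", "fieldRef": "metadata.name"},
            {"name": "POD_UID",  "fieldRef": "metadata.uid"}]

def resolve_env(pod, env_spec):
    """Resolve each fieldRef path against the pod object."""
    resolved = {}
    for entry in env_spec:
        value = pod
        for key in entry["fieldRef"].split("."):
            value = value[key]
        resolved[entry["name"]] = value
    return resolved

env = resolve_env(pod, env_spec)
print(env["POD_UID"])
# → b614d76c-6443-4a8c-83fd-e76cc4b38b46
```

The test container simply prints its environment; the framework fetches the container log (the "Trying to get logs" step above) and checks the UID appears there.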
• [SLOW TEST:8.683 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:34.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-7151bfe6-1332-4f76-9930-7839ddeb5952 STEP: Creating a pod to test consume secrets May 6 17:37:35.263: INFO: Waiting up to 5m0s for pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7" in namespace "secrets-2156" to be "Succeeded or Failed" May 6 17:37:35.434: INFO: Pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 171.043579ms May 6 17:37:37.691: INFO: Pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.428286019s May 6 17:37:39.695: INFO: Pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.431446205s May 6 17:37:41.699: INFO: Pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.435303415s STEP: Saw pod success May 6 17:37:41.699: INFO: Pod "pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7" satisfied condition "Succeeded or Failed" May 6 17:37:41.701: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7 container secret-volume-test: STEP: delete the pod May 6 17:37:41.913: INFO: Waiting for pod pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7 to disappear May 6 17:37:41.925: INFO: Pod pod-secrets-441e45b9-efbb-4e13-953e-ff4cb831e0b7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:41.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2156" for this suite. 
• [SLOW TEST:7.024 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:41.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0506 17:37:53.983060 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 17:37:53.983: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:53.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8118" for this suite. 
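The garbage-collector test above deletes a replication controller without orphaning and then waits for its pods to disappear. The mechanism is ownerReferences: dependents pointing at a deleted owner are collected, unrelated objects survive. A toy model of that cascade (names and UIDs are illustrative):

```python
def garbage_collect(objects, deleted_owner_uid):
    """Return the objects that survive deletion of the given owner:
    anything holding an ownerReference to it is collected."""
    return [o for o in objects
            if deleted_owner_uid not in o.get("ownerReferences", [])]

pods = [{"name": "simpletest.rc-abc", "ownerReferences": ["rc-uid-1"]},
        {"name": "simpletest.rc-def", "ownerReferences": ["rc-uid-1"]},
        {"name": "unrelated-pod", "ownerReferences": []}]

survivors = garbage_collect(pods, "rc-uid-1")
print([p["name"] for p in survivors])
# → ['unrelated-pod']
```

With orphaning (the `orphan` propagation policy), the real garbage collector would instead strip the ownerReferences and leave the pods running, which is what the companion "orphaning" conformance tests verify.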
• [SLOW TEST:12.039 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":21,"skipped":392,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:53.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 17:37:54.674: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 17:37:59.734: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:37:59.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8451" for this suite. 
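The ReplicationController test above relabels a managed pod so it no longer matches the controller's selector, and the controller "releases" it. At its core (ignoring set-based selector expressions), equality-based matching is just a subset check on labels, so a relabeled pod drops out of the controller's active set:

```python
def matches(selector, labels):
    """Equality-based selector match: every selector pair must be
    present in the pod's labels (a simplification of real selectors)."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}          # the RC's selector, per the log
pod_labels = {"name": "pod-release"}
assert matches(selector, pod_labels)        # pod is owned by the RC

pod_labels["name"] = "not-matching"         # relabel, as the test does
print(matches(selector, pod_labels))        # the RC releases the pod
# → False
```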
• [SLOW TEST:5.948 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":22,"skipped":403,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:37:59.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 6 17:38:00.180: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 6 17:38:11.922: INFO: >>> kubeConfig: /root/.kube/config May 6 17:38:13.890: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:38:26.232: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6072" for this suite. • [SLOW TEST:26.636 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":23,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:38:26.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:38:40.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2015" for this suite. • [SLOW TEST:14.170 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":24,"skipped":430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:38:40.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6545.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6545.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6545.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6545.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6545.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6545.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 17:38:58.318: INFO: DNS probes using dns-6545/dns-test-65f656da-61c8-487b-9b79-3147df5f46f3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:38:59.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6545" for this suite. • [SLOW TEST:18.746 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":25,"skipped":454,"failed":0} SSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:38:59.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:39:00.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-331" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":26,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:39:00.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command May 6 17:39:00.929: INFO: Waiting up to 5m0s for pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486" in namespace "var-expansion-3890" to be "Succeeded or Failed" May 6 17:39:00.946: INFO: Pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486": Phase="Pending", Reason="", readiness=false. Elapsed: 17.02337ms May 6 17:39:02.951: INFO: Pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021542868s May 6 17:39:04.953: INFO: Pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023793427s May 6 17:39:06.957: INFO: Pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027855546s STEP: Saw pod success May 6 17:39:06.957: INFO: Pod "var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486" satisfied condition "Succeeded or Failed" May 6 17:39:06.959: INFO: Trying to get logs from node kali-worker pod var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486 container dapi-container: STEP: delete the pod May 6 17:39:07.044: INFO: Waiting for pod var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486 to disappear May 6 17:39:07.054: INFO: Pod var-expansion-b68b7138-3776-4eae-97f8-b34e850dd486 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:39:07.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3890" for this suite. 
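The wheezy/jessie probe loops in the DNS test above build the pod's A-record name by replacing the dots in the pod IP with dashes and appending the namespace's pod zone. That transformation can be reproduced locally with the same awk expression (the IP value here is illustrative; the real probe takes it from `hostname -i`):

```shell
# Derive the podARec name exactly as the probe loop does.
# pod_ip is a hypothetical example value, not taken from this run.
pod_ip="10.244.1.248"
pod_a_rec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6545.pod.cluster.local"}')
echo "$pod_a_rec"   # 10-244-1-248.dns-6545.pod.cluster.local
```

The probe then resolves that name with `dig +notcp` and `dig +tcp` and writes an OK marker file per lookup, which is what "looking for the results for each expected name from probers" checks.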
• [SLOW TEST:6.669 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":488,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:39:07.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 6 17:39:07.104: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 17:39:07.114: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 17:39:12.208: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 17:39:12.208: INFO: Creating deployment "test-rolling-update-deployment" May 6 17:39:12.240: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" 
has May 6 17:39:12.275: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 17:39:14.560: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 6 17:39:14.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383552, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383552, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383552, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724383552, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 17:39:17.124: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 6 17:39:17.256: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6509 /apis/apps/v1/namespaces/deployment-6509/deployments/test-rolling-update-deployment d7449351-1e97-4518-827a-61be5b440d3a 2048521 1 2020-05-06 17:39:12 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-06 17:39:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[managed-fields JSON bytes elided],}} {kube-controller-manager Update apps/v1 2020-05-06 17:39:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managed-fields JSON bytes elided],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f62558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 17:39:12 +0000 UTC,LastTransitionTime:2020-05-06 17:39:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-06
17:39:16 +0000 UTC,LastTransitionTime:2020-05-06 17:39:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 17:39:17.287: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-6509 /apis/apps/v1/namespaces/deployment-6509/replicasets/test-rolling-update-deployment-59d5cb45c7 0268afe6-7d23-4518-a4d0-ec852a046b5e 2048509 1 2020-05-06 17:39:12 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d7449351-1e97-4518-827a-61be5b440d3a 0xc004f62bc7 0xc004f62bc8}] [] [{kube-controller-manager Update apps/v1 2020-05-06 17:39:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[managed-fields JSON bytes elided],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004f62c68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 17:39:17.287: INFO: All old ReplicaSets of Deployment
"test-rolling-update-deployment": May 6 17:39:17.287: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6509 /apis/apps/v1/namespaces/deployment-6509/replicasets/test-rolling-update-controller 3f450793-ceab-4d3e-b5ab-fbf0129ccbc0 2048520 2 2020-05-06 17:39:07 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d7449351-1e97-4518-827a-61be5b440d3a 0xc004f62aa7 0xc004f62aa8}] [] [{e2e.test Update apps/v1 2020-05-06 17:39:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[managed-fields JSON bytes elided],}} {kube-controller-manager Update apps/v1 2020-05-06 17:39:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managed-fields JSON bytes elided],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004f62b58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 17:39:17.316: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-sbv8s" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-sbv8s test-rolling-update-deployment-59d5cb45c7- deployment-6509 /api/v1/namespaces/deployment-6509/pods/test-rolling-update-deployment-59d5cb45c7-sbv8s 0661b070-33dc-4c92-a198-105597b42b9d 2048508 0 2020-05-06 17:39:12 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 0268afe6-7d23-4518-a4d0-ec852a046b5e 0xc004f631c7 0xc004f631c8}] [] [{kube-controller-manager Update v1 2020-05-06 17:39:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[managed-fields JSON bytes elided],}} {kubelet Update v1 2020-05-06 17:39:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[managed-fields JSON bytes elided],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xp2ps,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xp2ps,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xp2ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},Window
sOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 17:39:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 17:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 17:39:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 17:39:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.248,StartTime:2020-05-06 17:39:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 17:39:16 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://506219efb51e392999f8b7a022cd1f276f885b5362f31afb9959532151c2146b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:39:17.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6509" for this suite. • [SLOW TEST:10.288 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":28,"skipped":492,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:39:17.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
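The `FieldsV1{Raw:*[...]}` blobs in the pod dump above are managed-fields JSON that the log renders as space-separated decimal byte values. When reading such dumps, they can be turned back into JSON with a small helper (a stand-alone sketch, not part of the test suite; `decodeFieldsV1` is a name chosen here, and the sample below is the first few bytes of the kubelet-managed fields printed earlier):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeFieldsV1 converts a space-separated decimal byte dump
// (as the log prints for FieldsV1{Raw:*[...]}) back into its JSON text.
func decodeFieldsV1(raw string) string {
	fields := strings.Fields(raw)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			continue // skip anything that is not a decimal byte
		}
		buf = append(buf, byte(n))
	}
	return string(buf)
}

func main() {
	// First bytes of the kubelet-managed fields in the dump above.
	sample := "123 34 102 58 115 116 97 116 117 115 34 58 123"
	fmt.Println(decodeFieldsV1(sample)) // {"f:status":{
}
```

Feeding it the full byte array recovers the managed-fields map (`f:status`, `f:conditions`, `f:podIPs`, and so on) that the kubelet recorded for the pod.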
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller May 6 17:39:17.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6437' May 6 17:39:17.969: INFO: stderr: "" May 6 17:39:17.969: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 17:39:17.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:18.500: INFO: stderr: "" May 6 17:39:18.500: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " May 6 17:39:18.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjvlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:18.594: INFO: stderr: "" May 6 17:39:18.594: INFO: stdout: "" May 6 17:39:18.594: INFO: update-demo-nautilus-hjvlr is created but not running May 6 17:39:23.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:23.844: INFO: stderr: "" May 6 17:39:23.844: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " May 6 17:39:23.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjvlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:24.236: INFO: stderr: "" May 6 17:39:24.236: INFO: stdout: "" May 6 17:39:24.236: INFO: update-demo-nautilus-hjvlr is created but not running May 6 17:39:29.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:29.587: INFO: stderr: "" May 6 17:39:29.587: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " May 6 17:39:29.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjvlr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:29.976: INFO: stderr: "" May 6 17:39:29.976: INFO: stdout: "true" May 6 17:39:29.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hjvlr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:30.178: INFO: stderr: "" May 6 17:39:30.178: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:39:30.178: INFO: validating pod update-demo-nautilus-hjvlr May 6 17:39:30.414: INFO: got data: { "image": "nautilus.jpg" } May 6 17:39:30.414: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:39:30.414: INFO: update-demo-nautilus-hjvlr is verified up and running May 6 17:39:30.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:30.732: INFO: stderr: "" May 6 17:39:30.732: INFO: stdout: "true" May 6 17:39:30.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:30.825: INFO: stderr: "" May 6 17:39:30.825: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:39:30.825: INFO: validating pod update-demo-nautilus-m6qzc May 6 17:39:31.049: INFO: got data: { "image": "nautilus.jpg" } May 6 17:39:31.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:39:31.050: INFO: update-demo-nautilus-m6qzc is verified up and running STEP: scaling down the replication controller May 6 17:39:31.052: INFO: scanned /root for discovery docs: May 6 17:39:31.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6437' May 6 17:39:32.428: INFO: stderr: "" May 6 17:39:32.428: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 6 17:39:32.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:32.958: INFO: stderr: "" May 6 17:39:32.958: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 17:39:37.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:38.057: INFO: stderr: "" May 6 17:39:38.057: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 17:39:43.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:43.166: INFO: stderr: "" May 6 17:39:43.166: INFO: stdout: "update-demo-nautilus-hjvlr update-demo-nautilus-m6qzc " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 17:39:48.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:48.273: INFO: stderr: "" May 6 17:39:48.273: INFO: stdout: "update-demo-nautilus-m6qzc " May 6 17:39:48.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:48.373: INFO: stderr: "" May 6 17:39:48.373: INFO: stdout: "true" May 6 17:39:48.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:48.468: INFO: stderr: "" May 6 17:39:48.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:39:48.469: INFO: validating pod update-demo-nautilus-m6qzc May 6 17:39:48.472: INFO: got data: { "image": "nautilus.jpg" } May 6 17:39:48.472: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:39:48.472: INFO: update-demo-nautilus-m6qzc is verified up and running STEP: scaling up the replication controller May 6 17:39:48.473: INFO: scanned /root for discovery docs: May 6 17:39:48.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6437' May 6 17:39:49.853: INFO: stderr: "" May 6 17:39:49.853: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 17:39:49.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:49.962: INFO: stderr: "" May 6 17:39:49.963: INFO: stdout: "update-demo-nautilus-bnq6t update-demo-nautilus-m6qzc " May 6 17:39:49.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnq6t -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:50.162: INFO: stderr: "" May 6 17:39:50.162: INFO: stdout: "" May 6 17:39:50.162: INFO: update-demo-nautilus-bnq6t is created but not running May 6 17:39:55.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6437' May 6 17:39:55.286: INFO: stderr: "" May 6 17:39:55.287: INFO: stdout: "update-demo-nautilus-bnq6t update-demo-nautilus-m6qzc " May 6 17:39:55.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnq6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:55.388: INFO: stderr: "" May 6 17:39:55.388: INFO: stdout: "true" May 6 17:39:55.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnq6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:55.491: INFO: stderr: "" May 6 17:39:55.491: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:39:55.491: INFO: validating pod update-demo-nautilus-bnq6t May 6 17:39:55.495: INFO: got data: { "image": "nautilus.jpg" } May 6 17:39:55.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 17:39:55.495: INFO: update-demo-nautilus-bnq6t is verified up and running May 6 17:39:55.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:55.589: INFO: stderr: "" May 6 17:39:55.589: INFO: stdout: "true" May 6 17:39:55.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6qzc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6437' May 6 17:39:55.672: INFO: stderr: "" May 6 17:39:55.672: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:39:55.672: INFO: validating pod update-demo-nautilus-m6qzc May 6 17:39:55.700: INFO: got data: { "image": "nautilus.jpg" } May 6 17:39:55.701: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:39:55.701: INFO: update-demo-nautilus-m6qzc is verified up and running STEP: using delete to clean up resources May 6 17:39:55.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6437' May 6 17:39:55.800: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 17:39:55.800: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 17:39:55.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6437' May 6 17:39:55.911: INFO: stderr: "No resources found in kubectl-6437 namespace.\n" May 6 17:39:55.911: INFO: stdout: "" May 6 17:39:55.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 17:39:56.007: INFO: stderr: "" May 6 17:39:56.007: INFO: stdout: "update-demo-nautilus-bnq6t\nupdate-demo-nautilus-m6qzc\n" May 6 17:39:56.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6437' May 6 17:39:56.600: INFO: stderr: "No resources found in kubectl-6437 namespace.\n" May 6 17:39:56.600: INFO: stdout: "" May 6 17:39:56.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 17:39:56.688: INFO: stderr: "" May 6 17:39:56.688: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:39:56.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6437" for this suite. 
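The readiness check the test runs above passes a Go template to `kubectl get pods -o template` that uses an `exists` helper to probe nested fields before dereferencing them. Outside kubectl's template machinery, the same logic can be sketched in plain `text/template` with a hypothetical stand-in for `exists` (an illustration only, not the framework's implementation):

```go
package main

import (
	"os"
	"text/template"
)

// exists reports whether a chain of keys is present in nested maps,
// mimicking the helper the test's kubectl template relies on.
// (Hypothetical stand-alone version for illustration.)
func exists(m interface{}, keys ...string) bool {
	for _, k := range keys {
		mm, ok := m.(map[string]interface{})
		if !ok {
			return false
		}
		if m, ok = mm[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// A trimmed-down pod object with one running update-demo container.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}
	// The same template shape the test passes to kubectl.
	tmpl := template.Must(template.New("ready").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(`{{if (exists . "status" "containerStatuses")}}` +
			`{{range .status.containerStatuses}}` +
			`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
			`{{end}}{{end}}`))
	tmpl.Execute(os.Stdout, pod) // prints "true" once the container is running
}
```

An empty stdout from this template is why the log reports "created but not running" until the container's `state.running` field appears.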
• [SLOW TEST:39.344 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":29,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:39:56.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e14cd580-c67d-4425-a6ee-4508dde68e37 STEP: Creating a pod to test consume secrets May 6 17:39:57.822: INFO: Waiting up to 5m0s for pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f" in namespace "secrets-536" to be "Succeeded or Failed" May 6 17:39:57.876: INFO: Pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.780297ms May 6 17:39:59.920: INFO: Pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097986192s May 6 17:40:01.999: INFO: Pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176554377s May 6 17:40:04.119: INFO: Pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.296591185s STEP: Saw pod success May 6 17:40:04.119: INFO: Pod "pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f" satisfied condition "Succeeded or Failed" May 6 17:40:04.170: INFO: Trying to get logs from node kali-worker pod pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f container secret-volume-test: STEP: delete the pod May 6 17:40:04.606: INFO: Waiting for pod pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f to disappear May 6 17:40:04.625: INFO: Pod pod-secrets-c4d940cd-ea59-49ca-843f-3d13dbdc699f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:40:04.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-536" for this suite. STEP: Destroying namespace "secret-namespace-6091" for this suite. 
• [SLOW TEST:8.321 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":529,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:40:05.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 17:40:11.936: INFO: Successfully updated pod "pod-update-2931abba-dd57-40aa-bc20-8dc65172fa9e" STEP: verifying the updated pod is in kubernetes May 6 17:40:11.989: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:40:11.989: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "pods-9753" for this suite. • [SLOW TEST:6.991 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:40:12.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4769 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-4769 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4769 May 6 17:40:12.252: INFO: Found 0 stateful pods, waiting for 1 May 6 17:40:22.257: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 6 17:40:22.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 17:40:22.501: INFO: stderr: "I0506 17:40:22.384917 682 log.go:172] (0xc000b3b290) (0xc000b083c0) Create stream\nI0506 17:40:22.384979 682 log.go:172] (0xc000b3b290) (0xc000b083c0) Stream added, broadcasting: 1\nI0506 17:40:22.388114 682 log.go:172] (0xc000b3b290) Reply frame received for 1\nI0506 17:40:22.388145 682 log.go:172] (0xc000b3b290) (0xc000b08460) Create stream\nI0506 17:40:22.388153 682 log.go:172] (0xc000b3b290) (0xc000b08460) Stream added, broadcasting: 3\nI0506 17:40:22.389257 682 log.go:172] (0xc000b3b290) Reply frame received for 3\nI0506 17:40:22.389427 682 log.go:172] (0xc000b3b290) (0xc00099e000) Create stream\nI0506 17:40:22.389449 682 log.go:172] (0xc000b3b290) (0xc00099e000) Stream added, broadcasting: 5\nI0506 17:40:22.390350 682 log.go:172] (0xc000b3b290) Reply frame received for 5\nI0506 17:40:22.453744 682 log.go:172] (0xc000b3b290) Data frame received for 5\nI0506 17:40:22.453760 682 log.go:172] (0xc00099e000) (5) Data frame handling\nI0506 17:40:22.453769 682 log.go:172] (0xc00099e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:40:22.495221 682 log.go:172] (0xc000b3b290) Data frame received for 3\nI0506 17:40:22.495244 682 log.go:172] (0xc000b08460) (3) Data frame handling\nI0506 17:40:22.495263 682 log.go:172] (0xc000b08460) (3) Data frame sent\nI0506 17:40:22.495269 682 log.go:172] (0xc000b3b290) Data frame received for 3\nI0506 17:40:22.495273 682 log.go:172] (0xc000b08460) (3) Data frame handling\nI0506 17:40:22.495396 682 log.go:172] (0xc000b3b290) Data frame received for 5\nI0506 
17:40:22.495431 682 log.go:172] (0xc00099e000) (5) Data frame handling\nI0506 17:40:22.497072 682 log.go:172] (0xc000b3b290) Data frame received for 1\nI0506 17:40:22.497083 682 log.go:172] (0xc000b083c0) (1) Data frame handling\nI0506 17:40:22.497093 682 log.go:172] (0xc000b083c0) (1) Data frame sent\nI0506 17:40:22.497105 682 log.go:172] (0xc000b3b290) (0xc000b083c0) Stream removed, broadcasting: 1\nI0506 17:40:22.497263 682 log.go:172] (0xc000b3b290) Go away received\nI0506 17:40:22.497470 682 log.go:172] (0xc000b3b290) (0xc000b083c0) Stream removed, broadcasting: 1\nI0506 17:40:22.497484 682 log.go:172] (0xc000b3b290) (0xc000b08460) Stream removed, broadcasting: 3\nI0506 17:40:22.497492 682 log.go:172] (0xc000b3b290) (0xc00099e000) Stream removed, broadcasting: 5\n" May 6 17:40:22.502: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 17:40:22.502: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 17:40:22.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 17:40:32.519: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 17:40:32.519: INFO: Waiting for statefulset status.replicas updated to 0 May 6 17:40:32.596: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:40:32.596: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:40:32.596: INFO: May 6 17:40:32.596: INFO: StatefulSet ss has not reached scale 3, at 1 May 
6 17:40:33.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995499575s May 6 17:40:34.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991002337s May 6 17:40:35.662: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.933348016s May 6 17:40:36.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.928990098s May 6 17:40:38.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.925887339s May 6 17:40:39.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.567437897s May 6 17:40:41.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.005052674s May 6 17:40:42.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 557.915565ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4769 May 6 17:40:43.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 17:40:44.702: INFO: stderr: "I0506 17:40:44.637069 701 log.go:172] (0xc0009b0000) (0xc0009ac000) Create stream\nI0506 17:40:44.637259 701 log.go:172] (0xc0009b0000) (0xc0009ac000) Stream added, broadcasting: 1\nI0506 17:40:44.639211 701 log.go:172] (0xc0009b0000) Reply frame received for 1\nI0506 17:40:44.639258 701 log.go:172] (0xc0009b0000) (0xc0007a9220) Create stream\nI0506 17:40:44.639271 701 log.go:172] (0xc0009b0000) (0xc0007a9220) Stream added, broadcasting: 3\nI0506 17:40:44.640027 701 log.go:172] (0xc0009b0000) Reply frame received for 3\nI0506 17:40:44.640060 701 log.go:172] (0xc0009b0000) (0xc0009ac0a0) Create stream\nI0506 17:40:44.640071 701 log.go:172] (0xc0009b0000) (0xc0009ac0a0) Stream added, broadcasting: 5\nI0506 17:40:44.640692 701 log.go:172] (0xc0009b0000) Reply frame received for 5\nI0506 17:40:44.696781 701 log.go:172] 
(0xc0009b0000) Data frame received for 5\nI0506 17:40:44.696812 701 log.go:172] (0xc0009ac0a0) (5) Data frame handling\nI0506 17:40:44.696822 701 log.go:172] (0xc0009ac0a0) (5) Data frame sent\nI0506 17:40:44.696829 701 log.go:172] (0xc0009b0000) Data frame received for 5\nI0506 17:40:44.696835 701 log.go:172] (0xc0009ac0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 17:40:44.696865 701 log.go:172] (0xc0009b0000) Data frame received for 3\nI0506 17:40:44.696894 701 log.go:172] (0xc0007a9220) (3) Data frame handling\nI0506 17:40:44.696912 701 log.go:172] (0xc0007a9220) (3) Data frame sent\nI0506 17:40:44.696920 701 log.go:172] (0xc0009b0000) Data frame received for 3\nI0506 17:40:44.696927 701 log.go:172] (0xc0007a9220) (3) Data frame handling\nI0506 17:40:44.698339 701 log.go:172] (0xc0009b0000) Data frame received for 1\nI0506 17:40:44.698352 701 log.go:172] (0xc0009ac000) (1) Data frame handling\nI0506 17:40:44.698360 701 log.go:172] (0xc0009ac000) (1) Data frame sent\nI0506 17:40:44.698369 701 log.go:172] (0xc0009b0000) (0xc0009ac000) Stream removed, broadcasting: 1\nI0506 17:40:44.698455 701 log.go:172] (0xc0009b0000) Go away received\nI0506 17:40:44.698638 701 log.go:172] (0xc0009b0000) (0xc0009ac000) Stream removed, broadcasting: 1\nI0506 17:40:44.698651 701 log.go:172] (0xc0009b0000) (0xc0007a9220) Stream removed, broadcasting: 3\nI0506 17:40:44.698659 701 log.go:172] (0xc0009b0000) (0xc0009ac0a0) Stream removed, broadcasting: 5\n" May 6 17:40:44.702: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 17:40:44.702: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 17:40:44.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true' May 6 17:40:46.372: INFO: stderr: "I0506 17:40:46.315801 721 log.go:172] (0xc000aeafd0) (0xc000ae23c0) Create stream\nI0506 17:40:46.315849 721 log.go:172] (0xc000aeafd0) (0xc000ae23c0) Stream added, broadcasting: 1\nI0506 17:40:46.317867 721 log.go:172] (0xc000aeafd0) Reply frame received for 1\nI0506 17:40:46.317900 721 log.go:172] (0xc000aeafd0) (0xc000a9c0a0) Create stream\nI0506 17:40:46.317916 721 log.go:172] (0xc000aeafd0) (0xc000a9c0a0) Stream added, broadcasting: 3\nI0506 17:40:46.318710 721 log.go:172] (0xc000aeafd0) Reply frame received for 3\nI0506 17:40:46.318733 721 log.go:172] (0xc000aeafd0) (0xc000ae2460) Create stream\nI0506 17:40:46.318754 721 log.go:172] (0xc000aeafd0) (0xc000ae2460) Stream added, broadcasting: 5\nI0506 17:40:46.320410 721 log.go:172] (0xc000aeafd0) Reply frame received for 5\nI0506 17:40:46.367680 721 log.go:172] (0xc000aeafd0) Data frame received for 3\nI0506 17:40:46.367718 721 log.go:172] (0xc000a9c0a0) (3) Data frame handling\nI0506 17:40:46.367729 721 log.go:172] (0xc000a9c0a0) (3) Data frame sent\nI0506 17:40:46.367745 721 log.go:172] (0xc000aeafd0) Data frame received for 3\nI0506 17:40:46.367752 721 log.go:172] (0xc000a9c0a0) (3) Data frame handling\nI0506 17:40:46.367797 721 log.go:172] (0xc000aeafd0) Data frame received for 5\nI0506 17:40:46.367811 721 log.go:172] (0xc000ae2460) (5) Data frame handling\nI0506 17:40:46.367825 721 log.go:172] (0xc000ae2460) (5) Data frame sent\nI0506 17:40:46.367837 721 log.go:172] (0xc000aeafd0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 17:40:46.367851 721 log.go:172] (0xc000ae2460) (5) Data frame handling\nI0506 17:40:46.369285 721 log.go:172] (0xc000aeafd0) Data frame received for 1\nI0506 17:40:46.369335 721 log.go:172] (0xc000ae23c0) (1) Data frame handling\nI0506 17:40:46.369353 721 log.go:172] (0xc000ae23c0) (1) Data frame sent\nI0506 17:40:46.369365 721 
log.go:172] (0xc000aeafd0) (0xc000ae23c0) Stream removed, broadcasting: 1\nI0506 17:40:46.369378 721 log.go:172] (0xc000aeafd0) Go away received\nI0506 17:40:46.369682 721 log.go:172] (0xc000aeafd0) (0xc000ae23c0) Stream removed, broadcasting: 1\nI0506 17:40:46.369701 721 log.go:172] (0xc000aeafd0) (0xc000a9c0a0) Stream removed, broadcasting: 3\nI0506 17:40:46.369708 721 log.go:172] (0xc000aeafd0) (0xc000ae2460) Stream removed, broadcasting: 5\n" May 6 17:40:46.372: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 17:40:46.373: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 17:40:46.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 17:40:47.687: INFO: stderr: "I0506 17:40:47.608673 737 log.go:172] (0xc000b531e0) (0xc000b325a0) Create stream\nI0506 17:40:47.608733 737 log.go:172] (0xc000b531e0) (0xc000b325a0) Stream added, broadcasting: 1\nI0506 17:40:47.612246 737 log.go:172] (0xc000b531e0) Reply frame received for 1\nI0506 17:40:47.612290 737 log.go:172] (0xc000b531e0) (0xc00066b7c0) Create stream\nI0506 17:40:47.612310 737 log.go:172] (0xc000b531e0) (0xc00066b7c0) Stream added, broadcasting: 3\nI0506 17:40:47.613449 737 log.go:172] (0xc000b531e0) Reply frame received for 3\nI0506 17:40:47.613498 737 log.go:172] (0xc000b531e0) (0xc000afe140) Create stream\nI0506 17:40:47.613517 737 log.go:172] (0xc000b531e0) (0xc000afe140) Stream added, broadcasting: 5\nI0506 17:40:47.614369 737 log.go:172] (0xc000b531e0) Reply frame received for 5\nI0506 17:40:47.681888 737 log.go:172] (0xc000b531e0) Data frame received for 5\nI0506 17:40:47.682029 737 log.go:172] (0xc000b531e0) Data frame received for 3\nI0506 17:40:47.682063 737 log.go:172] (0xc00066b7c0) (3) Data 
frame handling\nI0506 17:40:47.682093 737 log.go:172] (0xc00066b7c0) (3) Data frame sent\nI0506 17:40:47.682103 737 log.go:172] (0xc000b531e0) Data frame received for 3\nI0506 17:40:47.682108 737 log.go:172] (0xc00066b7c0) (3) Data frame handling\nI0506 17:40:47.682134 737 log.go:172] (0xc000afe140) (5) Data frame handling\nI0506 17:40:47.682149 737 log.go:172] (0xc000afe140) (5) Data frame sent\nI0506 17:40:47.682157 737 log.go:172] (0xc000b531e0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 17:40:47.682162 737 log.go:172] (0xc000afe140) (5) Data frame handling\nI0506 17:40:47.683433 737 log.go:172] (0xc000b531e0) Data frame received for 1\nI0506 17:40:47.683443 737 log.go:172] (0xc000b325a0) (1) Data frame handling\nI0506 17:40:47.683449 737 log.go:172] (0xc000b325a0) (1) Data frame sent\nI0506 17:40:47.683457 737 log.go:172] (0xc000b531e0) (0xc000b325a0) Stream removed, broadcasting: 1\nI0506 17:40:47.683499 737 log.go:172] (0xc000b531e0) Go away received\nI0506 17:40:47.683696 737 log.go:172] (0xc000b531e0) (0xc000b325a0) Stream removed, broadcasting: 1\nI0506 17:40:47.683711 737 log.go:172] (0xc000b531e0) (0xc00066b7c0) Stream removed, broadcasting: 3\nI0506 17:40:47.683718 737 log.go:172] (0xc000b531e0) (0xc000afe140) Stream removed, broadcasting: 5\n" May 6 17:40:47.687: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 17:40:47.687: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 17:40:47.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 17:40:47.814: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 17:40:47.814: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will 
not halt with unhealthy stateful pod May 6 17:40:47.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 17:40:47.994: INFO: stderr: "I0506 17:40:47.931875 756 log.go:172] (0xc00003a4d0) (0xc0007f2000) Create stream\nI0506 17:40:47.931943 756 log.go:172] (0xc00003a4d0) (0xc0007f2000) Stream added, broadcasting: 1\nI0506 17:40:47.933905 756 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0506 17:40:47.933942 756 log.go:172] (0xc00003a4d0) (0xc0006b79a0) Create stream\nI0506 17:40:47.933952 756 log.go:172] (0xc00003a4d0) (0xc0006b79a0) Stream added, broadcasting: 3\nI0506 17:40:47.934508 756 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0506 17:40:47.934528 756 log.go:172] (0xc00003a4d0) (0xc0007f20a0) Create stream\nI0506 17:40:47.934534 756 log.go:172] (0xc00003a4d0) (0xc0007f20a0) Stream added, broadcasting: 5\nI0506 17:40:47.935177 756 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0506 17:40:47.988995 756 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0506 17:40:47.989031 756 log.go:172] (0xc0006b79a0) (3) Data frame handling\nI0506 17:40:47.989044 756 log.go:172] (0xc0006b79a0) (3) Data frame sent\nI0506 17:40:47.989053 756 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0506 17:40:47.989061 756 log.go:172] (0xc0006b79a0) (3) Data frame handling\nI0506 17:40:47.989092 756 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0506 17:40:47.989101 756 log.go:172] (0xc0007f20a0) (5) Data frame handling\nI0506 17:40:47.989269 756 log.go:172] (0xc0007f20a0) (5) Data frame sent\nI0506 17:40:47.989288 756 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0506 17:40:47.989302 756 log.go:172] (0xc0007f20a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:40:47.990099 756 log.go:172] (0xc00003a4d0) Data frame 
received for 1\nI0506 17:40:47.990124 756 log.go:172] (0xc0007f2000) (1) Data frame handling\nI0506 17:40:47.990134 756 log.go:172] (0xc0007f2000) (1) Data frame sent\nI0506 17:40:47.990220 756 log.go:172] (0xc00003a4d0) (0xc0007f2000) Stream removed, broadcasting: 1\nI0506 17:40:47.990236 756 log.go:172] (0xc00003a4d0) Go away received\nI0506 17:40:47.990524 756 log.go:172] (0xc00003a4d0) (0xc0007f2000) Stream removed, broadcasting: 1\nI0506 17:40:47.990537 756 log.go:172] (0xc00003a4d0) (0xc0006b79a0) Stream removed, broadcasting: 3\nI0506 17:40:47.990542 756 log.go:172] (0xc00003a4d0) (0xc0007f20a0) Stream removed, broadcasting: 5\n" May 6 17:40:47.994: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 17:40:47.994: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 17:40:47.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 17:40:48.272: INFO: stderr: "I0506 17:40:48.122595 776 log.go:172] (0xc000a50000) (0xc000910000) Create stream\nI0506 17:40:48.122684 776 log.go:172] (0xc000a50000) (0xc000910000) Stream added, broadcasting: 1\nI0506 17:40:48.125728 776 log.go:172] (0xc000a50000) Reply frame received for 1\nI0506 17:40:48.125790 776 log.go:172] (0xc000a50000) (0xc000a82000) Create stream\nI0506 17:40:48.125813 776 log.go:172] (0xc000a50000) (0xc000a82000) Stream added, broadcasting: 3\nI0506 17:40:48.126787 776 log.go:172] (0xc000a50000) Reply frame received for 3\nI0506 17:40:48.126825 776 log.go:172] (0xc000a50000) (0xc000a820a0) Create stream\nI0506 17:40:48.126836 776 log.go:172] (0xc000a50000) (0xc000a820a0) Stream added, broadcasting: 5\nI0506 17:40:48.127638 776 log.go:172] (0xc000a50000) Reply frame received for 5\nI0506 17:40:48.183130 776 
log.go:172] (0xc000a50000) Data frame received for 5\nI0506 17:40:48.183152 776 log.go:172] (0xc000a820a0) (5) Data frame handling\nI0506 17:40:48.183167 776 log.go:172] (0xc000a820a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:40:48.264023 776 log.go:172] (0xc000a50000) Data frame received for 3\nI0506 17:40:48.264208 776 log.go:172] (0xc000a82000) (3) Data frame handling\nI0506 17:40:48.264328 776 log.go:172] (0xc000a82000) (3) Data frame sent\nI0506 17:40:48.264400 776 log.go:172] (0xc000a50000) Data frame received for 3\nI0506 17:40:48.264419 776 log.go:172] (0xc000a82000) (3) Data frame handling\nI0506 17:40:48.264451 776 log.go:172] (0xc000a50000) Data frame received for 5\nI0506 17:40:48.264477 776 log.go:172] (0xc000a820a0) (5) Data frame handling\nI0506 17:40:48.266383 776 log.go:172] (0xc000a50000) Data frame received for 1\nI0506 17:40:48.266451 776 log.go:172] (0xc000910000) (1) Data frame handling\nI0506 17:40:48.266473 776 log.go:172] (0xc000910000) (1) Data frame sent\nI0506 17:40:48.266490 776 log.go:172] (0xc000a50000) (0xc000910000) Stream removed, broadcasting: 1\nI0506 17:40:48.266534 776 log.go:172] (0xc000a50000) Go away received\nI0506 17:40:48.266901 776 log.go:172] (0xc000a50000) (0xc000910000) Stream removed, broadcasting: 1\nI0506 17:40:48.266928 776 log.go:172] (0xc000a50000) (0xc000a82000) Stream removed, broadcasting: 3\nI0506 17:40:48.266940 776 log.go:172] (0xc000a50000) (0xc000a820a0) Stream removed, broadcasting: 5\n" May 6 17:40:48.272: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 17:40:48.272: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 17:40:48.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true' May 6 17:40:49.166: INFO: stderr: "I0506 17:40:48.397983 796 log.go:172] (0xc000a098c0) (0xc000a506e0) Create stream\nI0506 17:40:48.398032 796 log.go:172] (0xc000a098c0) (0xc000a506e0) Stream added, broadcasting: 1\nI0506 17:40:48.402207 796 log.go:172] (0xc000a098c0) Reply frame received for 1\nI0506 17:40:48.402267 796 log.go:172] (0xc000a098c0) (0xc000641680) Create stream\nI0506 17:40:48.402292 796 log.go:172] (0xc000a098c0) (0xc000641680) Stream added, broadcasting: 3\nI0506 17:40:48.403236 796 log.go:172] (0xc000a098c0) Reply frame received for 3\nI0506 17:40:48.403290 796 log.go:172] (0xc000a098c0) (0xc0004c6aa0) Create stream\nI0506 17:40:48.403303 796 log.go:172] (0xc000a098c0) (0xc0004c6aa0) Stream added, broadcasting: 5\nI0506 17:40:48.404228 796 log.go:172] (0xc000a098c0) Reply frame received for 5\nI0506 17:40:48.467791 796 log.go:172] (0xc000a098c0) Data frame received for 5\nI0506 17:40:48.467815 796 log.go:172] (0xc0004c6aa0) (5) Data frame handling\nI0506 17:40:48.467828 796 log.go:172] (0xc0004c6aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:40:49.158604 796 log.go:172] (0xc000a098c0) Data frame received for 3\nI0506 17:40:49.158640 796 log.go:172] (0xc000641680) (3) Data frame handling\nI0506 17:40:49.158654 796 log.go:172] (0xc000641680) (3) Data frame sent\nI0506 17:40:49.158776 796 log.go:172] (0xc000a098c0) Data frame received for 5\nI0506 17:40:49.158791 796 log.go:172] (0xc0004c6aa0) (5) Data frame handling\nI0506 17:40:49.159019 796 log.go:172] (0xc000a098c0) Data frame received for 3\nI0506 17:40:49.159033 796 log.go:172] (0xc000641680) (3) Data frame handling\nI0506 17:40:49.160871 796 log.go:172] (0xc000a098c0) Data frame received for 1\nI0506 17:40:49.160886 796 log.go:172] (0xc000a506e0) (1) Data frame handling\nI0506 17:40:49.160899 796 log.go:172] (0xc000a506e0) (1) Data frame sent\nI0506 17:40:49.160912 796 log.go:172] (0xc000a098c0) (0xc000a506e0) Stream removed, 
broadcasting: 1\nI0506 17:40:49.161297 796 log.go:172] (0xc000a098c0) Go away received\nI0506 17:40:49.161341 796 log.go:172] (0xc000a098c0) (0xc000a506e0) Stream removed, broadcasting: 1\nI0506 17:40:49.161362 796 log.go:172] (0xc000a098c0) (0xc000641680) Stream removed, broadcasting: 3\nI0506 17:40:49.161376 796 log.go:172] (0xc000a098c0) (0xc0004c6aa0) Stream removed, broadcasting: 5\n" May 6 17:40:49.166: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 17:40:49.166: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 17:40:49.166: INFO: Waiting for statefulset status.replicas updated to 0 May 6 17:40:49.209: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 17:40:59.292: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 17:40:59.292: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 17:40:59.292: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 17:40:59.458: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:40:59.458: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:40:59.458: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:40:59.458: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:40:59.458: INFO: May 6 17:40:59.458: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:01.115: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:01.115: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:01.115: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:01.115: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 
17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:01.115: INFO: May 6 17:41:01.115: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:03.133: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:03.133: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:03.133: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:03.133: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:03.133: INFO: May 6 17:41:03.133: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:04.213: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:04.213: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:04.213: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:04.213: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:04.213: INFO: May 6 17:41:04.213: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:05.219: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:05.219: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:05.219: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:05.219: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:05.219: INFO: May 6 17:41:05.219: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:06.319: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:06.320: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:06.320: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:06.320: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:06.320: INFO: May 6 17:41:06.320: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:07.325: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:07.325: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:07.325: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:07.325: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:07.326: INFO: May 6 17:41:07.326: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:08.331: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:08.331: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:08.331: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:08.331: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:08.331: INFO: May 6 17:41:08.331: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 17:41:09.336: INFO: POD NODE PHASE GRACE CONDITIONS May 6 17:41:09.336: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:12 +0000 UTC }] May 6 17:41:09.336: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:09.337: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 17:40:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-05-06 17:40:32 +0000 UTC }] May 6 17:41:09.337: INFO: May 6 17:41:09.337: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4769 May 6 17:41:10.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 17:41:10.471: INFO: rc: 1 May 6 17:41:10.471: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 6 17:41:20.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 17:41:20.567: INFO: rc: 1 May 6 17:41:20.567: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [... the same RunHostCmd attempt is retried every 10s from 17:41:30 through 17:46:04, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 ...] May 6 17:46:14.209: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4769 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 17:46:14.727: INFO: rc: 1 May 6 17:46:14.727: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 6 17:46:14.727: INFO: Scaling statefulset ss to 0 May 6 17:46:14.742: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 6 17:46:14.756: INFO: Deleting all statefulset in ns statefulset-4769 May 6 17:46:14.758: INFO: Scaling statefulset ss to 0 May 6 17:46:14.768: INFO: Waiting for statefulset status.replicas updated to 0 May 6 17:46:14.770: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:46:14.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4769" for this suite. 
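The scale-down above stalls while the framework re-runs the same RunHostCmd every 10 seconds until its timeout expires, then gives up and proceeds. A minimal sketch of that retry-until-deadline pattern (function and parameter names here are hypothetical, not the e2e framework's actual helpers):

```python
import time


def retry_until(cmd, timeout_s=300.0, interval_s=10.0,
                clock=time.monotonic, sleep=time.sleep):
    """Re-run cmd every interval_s seconds until it succeeds (rc == 0)
    or timeout_s elapses; returns the last return code seen, so the
    caller can log the final rc the way the framework logs 'rc: 1'."""
    deadline = clock() + timeout_s
    rc = cmd()
    while rc != 0 and clock() < deadline:
        sleep(interval_s)
        rc = cmd()
    return rc
```

Note the loop checks the deadline before sleeping again, which is why the log shows one last attempt (17:46:14) right as the window closes.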
• [SLOW TEST:362.904 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":32,"skipped":569,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:46:14.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 17:46:55.772688 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 17:46:55.772: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:46:55.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3642" for this suite. • [SLOW TEST:40.865 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":33,"skipped":588,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:46:55.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:47:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4454" for this suite. • [SLOW TEST:14.134 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":34,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:47:09.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 6 17:47:16.873: INFO: 10 pods remaining May 6 17:47:16.873: INFO: 10 pods has nil DeletionTimestamp May 6 17:47:16.873: INFO: May 6 17:47:19.175: INFO: 10 pods remaining May 6 17:47:19.175: INFO: 1 pods has nil DeletionTimestamp May 6 17:47:19.175: INFO: May 6 17:47:21.730: INFO: 0 pods remaining May 6 17:47:21.730: INFO: 0 pods has nil DeletionTimestamp May 6 17:47:21.730: INFO: May 6 17:47:23.331: INFO: 0 pods remaining May 6 17:47:23.331: INFO: 0 pods has nil DeletionTimestamp May 6 17:47:23.331: INFO: STEP: Gathering metrics W0506 17:47:24.468396 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 17:47:24.468: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:47:24.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-442" for this suite. 
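The "N pods remaining / N pods has nil DeletionTimestamp" lines in the garbage-collector test above come from a poll that partitions the rc's pods by whether the API server has stamped them for deletion yet. A small sketch of that bookkeeping (the `Pod` type and `summarize` helper are illustrative, not the framework's actual code):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Pod:
    name: str
    # None until the API server marks the pod for deletion.
    deletion_timestamp: Optional[str] = None


def summarize(pods: List[Pod]) -> Tuple[int, int]:
    """Mirror the log's two counters: pods still present, and pods
    not yet marked for deletion (nil DeletionTimestamp)."""
    remaining = len(pods)
    nil_ts = sum(1 for p in pods if p.deletion_timestamp is None)
    return remaining, nil_ts
```

The test passes once both counters reach zero while the rc object itself is still present, confirming the rc is kept around until its dependents are gone.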
• [SLOW TEST:14.588 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":35,"skipped":654,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:47:24.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-bj5j
STEP: Creating a pod to test atomic-volume-subpath
May 6 17:47:25.859: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-bj5j" in namespace "subpath-7356" to be "Succeeded or Failed"
May 6 17:47:26.069: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Pending", Reason="", readiness=false. Elapsed: 210.013702ms
May 6 17:47:28.073: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21386909s
May 6 17:47:30.249: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390355086s
May 6 17:47:32.254: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 6.394396804s
May 6 17:47:34.258: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 8.398574786s
May 6 17:47:36.262: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 10.402488869s
May 6 17:47:38.266: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 12.407262587s
May 6 17:47:40.271: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 14.41140599s
May 6 17:47:42.275: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 16.416332739s
May 6 17:47:44.279: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 18.420276862s
May 6 17:47:46.283: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 20.424114122s
May 6 17:47:48.288: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 22.428673195s
May 6 17:47:50.292: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Running", Reason="", readiness=true. Elapsed: 24.432945702s
May 6 17:47:52.297: INFO: Pod "pod-subpath-test-projected-bj5j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.438206585s
STEP: Saw pod success
May 6 17:47:52.297: INFO: Pod "pod-subpath-test-projected-bj5j" satisfied condition "Succeeded or Failed"
May 6 17:47:52.300: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-bj5j container test-container-subpath-projected-bj5j:
STEP: delete the pod
May 6 17:47:52.364: INFO: Waiting for pod pod-subpath-test-projected-bj5j to disappear
May 6 17:47:52.374: INFO: Pod pod-subpath-test-projected-bj5j no longer exists
STEP: Deleting pod pod-subpath-test-projected-bj5j
May 6 17:47:52.374: INFO: Deleting pod "pod-subpath-test-projected-bj5j" in namespace "subpath-7356"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:47:52.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7356" for this suite.
• [SLOW TEST:27.884 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":36,"skipped":672,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:47:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:47:52.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1035" for this suite.
STEP: Destroying namespace "nspatchtest-f4005751-204e-49b9-9c93-fca78cf54c02-833" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":37,"skipped":687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:47:52.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 17:47:53.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 17:47:55.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 17:47:57.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384073, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 17:48:00.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:48:00.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6218-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:01.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3030" for this suite.
STEP: Destroying namespace "webhook-3030-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.052 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":38,"skipped":714,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:01.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 17:48:03.280: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 17:48:05.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384082, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 17:48:07.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384083, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384082, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 17:48:10.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:10.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2936" for this suite.
STEP: Destroying namespace "webhook-2936-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.070 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":39,"skipped":759,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:10.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-1493
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1493
STEP: Deleting pre-stop pod
May 6 17:48:24.082: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:24.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1493" for this suite.
• [SLOW TEST:13.319 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":40,"skipped":764,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:24.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 6 17:48:24.587: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a" in namespace "projected-274" to be "Succeeded or Failed"
May 6 17:48:24.591: INFO: Pod "downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425789ms
May 6 17:48:26.595: INFO: Pod "downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00821002s
May 6 17:48:28.712: INFO: Pod "downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124797566s
STEP: Saw pod success
May 6 17:48:28.712: INFO: Pod "downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a" satisfied condition "Succeeded or Failed"
May 6 17:48:28.715: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a container client-container:
STEP: delete the pod
May 6 17:48:28.820: INFO: Waiting for pod downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a to disappear
May 6 17:48:28.831: INFO: Pod downwardapi-volume-ac4c86a2-713b-436d-beab-6b07d96cc89a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:28.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-274" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":765,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:28.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 6 17:48:32.924: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:32.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5138" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":42,"skipped":770,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:32.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May 6 17:48:33.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23" in namespace "projected-9454" to be "Succeeded or Failed"
May 6 17:48:33.096: INFO: Pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23": Phase="Pending", Reason="", readiness=false. Elapsed: 11.134336ms
May 6 17:48:35.111: INFO: Pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026025486s
May 6 17:48:37.116: INFO: Pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23": Phase="Running", Reason="", readiness=true. Elapsed: 4.030689411s
May 6 17:48:39.120: INFO: Pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035468541s
STEP: Saw pod success
May 6 17:48:39.121: INFO: Pod "downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23" satisfied condition "Succeeded or Failed"
May 6 17:48:39.124: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23 container client-container:
STEP: delete the pod
May 6 17:48:39.160: INFO: Waiting for pod downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23 to disappear
May 6 17:48:39.166: INFO: Pod downwardapi-volume-f5ea15aa-fe48-4259-9dc8-f043a2f1ea23 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:39.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9454" for this suite.
• [SLOW TEST:6.193 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":798,"failed":0}
SS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:39.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May 6 17:48:39.251: INFO: Waiting up to 5m0s for pod "downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb" in namespace "downward-api-283" to be "Succeeded or Failed"
May 6 17:48:39.276: INFO: Pod "downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.666653ms
May 6 17:48:41.280: INFO: Pod "downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028710042s
May 6 17:48:43.375: INFO: Pod "downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123693728s
STEP: Saw pod success
May 6 17:48:43.375: INFO: Pod "downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb" satisfied condition "Succeeded or Failed"
May 6 17:48:43.378: INFO: Trying to get logs from node kali-worker2 pod downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb container dapi-container:
STEP: delete the pod
May 6 17:48:43.424: INFO: Waiting for pod downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb to disappear
May 6 17:48:43.472: INFO: Pod downward-api-316b6f8c-8a1c-4114-aa60-45387e0e9dcb no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-283" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":800,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:43.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May 6 17:48:43.696: INFO: Waiting up to 5m0s for pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03" in namespace "emptydir-9474" to be "Succeeded or Failed"
May 6 17:48:43.882: INFO: Pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03": Phase="Pending", Reason="", readiness=false. Elapsed: 185.935417ms
May 6 17:48:45.885: INFO: Pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189135033s
May 6 17:48:47.890: INFO: Pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03": Phase="Running", Reason="", readiness=true. Elapsed: 4.193966136s
May 6 17:48:49.894: INFO: Pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.19767216s
STEP: Saw pod success
May 6 17:48:49.894: INFO: Pod "pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03" satisfied condition "Succeeded or Failed"
May 6 17:48:49.896: INFO: Trying to get logs from node kali-worker2 pod pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03 container test-container:
STEP: delete the pod
May 6 17:48:49.925: INFO: Waiting for pod pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03 to disappear
May 6 17:48:49.961: INFO: Pod pod-852a5c92-5f6e-4c0c-805d-8ea6f3a35a03 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:49.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9474" for this suite.
• [SLOW TEST:6.428 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":803,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:49.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-cecf57f5-8c41-4d8d-b512-5828ea492a9c
STEP: Creating a pod to test consume configMaps
May 6 17:48:50.099: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4" in namespace "projected-6520" to be "Succeeded or Failed"
May 6 17:48:50.119: INFO: Pod "pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.421615ms
May 6 17:48:52.123: INFO: Pod "pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024206927s
May 6 17:48:54.128: INFO: Pod "pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028917273s
STEP: Saw pod success
May 6 17:48:54.128: INFO: Pod "pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4" satisfied condition "Succeeded or Failed"
May 6 17:48:54.132: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4 container projected-configmap-volume-test:
STEP: delete the pod
May 6 17:48:54.183: INFO: Waiting for pod pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4 to disappear
May 6 17:48:54.219: INFO: Pod pod-projected-configmaps-f089af4d-72d7-48ea-bf91-16593a25d2f4 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 6 17:48:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6520" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":814,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 6 17:48:54.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 6 17:48:54.766: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 6 17:48:54.785: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:54.801: INFO: Number of nodes with available pods: 0
May 6 17:48:54.801: INFO: Node kali-worker is running more than one daemon pod
May 6 17:48:55.807: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:55.811: INFO: Number of nodes with available pods: 0
May 6 17:48:55.811: INFO: Node kali-worker is running more than one daemon pod
May 6 17:48:56.806: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:56.810: INFO: Number of nodes with available pods: 0
May 6 17:48:56.810: INFO: Node kali-worker is running more than one daemon pod
May 6 17:48:57.805: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:57.808: INFO: Number of nodes with available pods: 0
May 6 17:48:57.808: INFO: Node kali-worker is running more than one daemon pod
May 6 17:48:58.807: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:58.812: INFO: Number of nodes with available pods: 1
May 6 17:48:58.812: INFO: Node kali-worker is running more than one daemon pod
May 6 17:48:59.807: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:48:59.810: INFO: Number of nodes with available pods: 2
May 6 17:48:59.810: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 6 17:48:59.863: INFO: Wrong image for pod: daemon-set-4chf2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:48:59.864: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:48:59.939: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:49:00.967: INFO: Wrong image for pod: daemon-set-4chf2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:49:00.967: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:49:00.971: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:49:01.944: INFO: Wrong image for pod: daemon-set-4chf2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:49:01.944: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:49:01.949: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 17:49:02.944: INFO: Wrong image for pod: daemon-set-4chf2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May 6 17:49:02.944: INFO: Pod daemon-set-4chf2 is not available May 6 17:49:02.944: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:02.947: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:03.968: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:03.968: INFO: Pod daemon-set-7sf49 is not available May 6 17:49:03.972: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:04.943: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:04.943: INFO: Pod daemon-set-7sf49 is not available May 6 17:49:04.946: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:05.960: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:05.960: INFO: Pod daemon-set-7sf49 is not available May 6 17:49:05.965: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:07.010: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 17:49:07.014: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:08.059: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:08.066: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:08.946: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:08.946: INFO: Pod daemon-set-7qrhf is not available May 6 17:49:08.951: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:09.945: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:09.945: INFO: Pod daemon-set-7qrhf is not available May 6 17:49:09.948: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:10.944: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:10.944: INFO: Pod daemon-set-7qrhf is not available May 6 17:49:10.948: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:11.944: INFO: Wrong image for pod: daemon-set-7qrhf. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:11.944: INFO: Pod daemon-set-7qrhf is not available May 6 17:49:11.948: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:12.944: INFO: Wrong image for pod: daemon-set-7qrhf. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. May 6 17:49:12.944: INFO: Pod daemon-set-7qrhf is not available May 6 17:49:12.948: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:14.140: INFO: Pod daemon-set-xvxzd is not available May 6 17:49:14.287: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 6 17:49:14.313: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:14.360: INFO: Number of nodes with available pods: 1 May 6 17:49:14.360: INFO: Node kali-worker is running more than one daemon pod May 6 17:49:15.365: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:15.368: INFO: Number of nodes with available pods: 1 May 6 17:49:15.368: INFO: Node kali-worker is running more than one daemon pod May 6 17:49:16.400: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:16.403: INFO: Number of nodes with available pods: 1 May 6 17:49:16.403: INFO: Node kali-worker is running more than one daemon pod May 6 17:49:17.376: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:17.399: INFO: Number of nodes with available pods: 1 May 6 17:49:17.399: INFO: Node kali-worker is running more than one daemon pod May 6 17:49:18.379: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 17:49:18.382: INFO: Number of nodes with available pods: 2 May 6 17:49:18.382: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9300, will wait for the garbage collector to delete the pods May 6 
17:49:18.455: INFO: Deleting DaemonSet.extensions daemon-set took: 5.697028ms May 6 17:49:18.756: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.210679ms May 6 17:49:23.758: INFO: Number of nodes with available pods: 0 May 6 17:49:23.758: INFO: Number of running nodes: 0, number of available pods: 0 May 6 17:49:23.761: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9300/daemonsets","resourceVersion":"2051378"},"items":null} May 6 17:49:23.764: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9300/pods","resourceVersion":"2051378"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:49:23.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9300" for this suite. • [SLOW TEST:29.553 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":47,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client 
May 6 17:49:23.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 6 17:49:24.824: INFO: Pod name wrapped-volume-race-239563da-934b-40a0-a187-b0f7c8eacaac: Found 0 pods out of 5 May 6 17:49:29.832: INFO: Pod name wrapped-volume-race-239563da-934b-40a0-a187-b0f7c8eacaac: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-239563da-934b-40a0-a187-b0f7c8eacaac in namespace emptydir-wrapper-2112, will wait for the garbage collector to delete the pods May 6 17:49:45.924: INFO: Deleting ReplicationController wrapped-volume-race-239563da-934b-40a0-a187-b0f7c8eacaac took: 14.391143ms May 6 17:49:46.225: INFO: Terminating ReplicationController wrapped-volume-race-239563da-934b-40a0-a187-b0f7c8eacaac pods took: 300.276843ms STEP: Creating RC which spawns configmap-volume pods May 6 17:50:03.963: INFO: Pod name wrapped-volume-race-f3bc0368-8988-447b-a732-304b3ed662cb: Found 0 pods out of 5 May 6 17:50:08.997: INFO: Pod name wrapped-volume-race-f3bc0368-8988-447b-a732-304b3ed662cb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f3bc0368-8988-447b-a732-304b3ed662cb in namespace emptydir-wrapper-2112, will wait for the garbage collector to delete the pods May 6 17:50:27.508: INFO: Deleting ReplicationController wrapped-volume-race-f3bc0368-8988-447b-a732-304b3ed662cb took: 38.985671ms May 6 17:50:27.808: INFO: Terminating ReplicationController wrapped-volume-race-f3bc0368-8988-447b-a732-304b3ed662cb pods took: 300.292833ms STEP: Creating RC which spawns 
configmap-volume pods May 6 17:50:43.679: INFO: Pod name wrapped-volume-race-2fc1c5ca-f436-4017-af1e-af7234a7aafc: Found 0 pods out of 5 May 6 17:50:48.714: INFO: Pod name wrapped-volume-race-2fc1c5ca-f436-4017-af1e-af7234a7aafc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2fc1c5ca-f436-4017-af1e-af7234a7aafc in namespace emptydir-wrapper-2112, will wait for the garbage collector to delete the pods May 6 17:51:03.971: INFO: Deleting ReplicationController wrapped-volume-race-2fc1c5ca-f436-4017-af1e-af7234a7aafc took: 177.991409ms May 6 17:51:04.671: INFO: Terminating ReplicationController wrapped-volume-race-2fc1c5ca-f436-4017-af1e-af7234a7aafc pods took: 700.329201ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:51:16.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2112" for this suite. 
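The wrapper-volume race test above creates 50 ConfigMaps, then repeatedly spawns and garbage-collects a ReplicationController whose pods mount all of them, checking that concurrent volume setup on one node does not race. A heavily trimmed sketch of such an RC (names, image, and replica count are illustrative):

```yaml
# Hypothetical RC whose pods mount many ConfigMap volumes at once.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example        # assumed name
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-example
  template:
    metadata:
      labels:
        name: wrapped-volume-race-example
    spec:
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0
          mountPath: /etc/config-0
        # ... one mount per ConfigMap, through racey-configmap-49
      volumes:
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0
      # ... one volume entry per ConfigMap
```

Deleting the RC and waiting for the garbage collector (as the log shows, three times over) is what gives concurrent teardown a chance to expose the race.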
• [SLOW TEST:112.380 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":48,"skipped":837,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:51:16.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 17:51:16.309: INFO: Waiting up to 5m0s for pod "pod-63ef549c-424b-4cef-80be-570014e0c7b5" in namespace "emptydir-5493" to be "Succeeded or Failed" May 6 17:51:16.318: INFO: Pod "pod-63ef549c-424b-4cef-80be-570014e0c7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838828ms May 6 17:51:18.322: INFO: Pod "pod-63ef549c-424b-4cef-80be-570014e0c7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013083939s May 6 17:51:20.326: INFO: Pod "pod-63ef549c-424b-4cef-80be-570014e0c7b5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016515794s STEP: Saw pod success May 6 17:51:20.326: INFO: Pod "pod-63ef549c-424b-4cef-80be-570014e0c7b5" satisfied condition "Succeeded or Failed" May 6 17:51:20.328: INFO: Trying to get logs from node kali-worker2 pod pod-63ef549c-424b-4cef-80be-570014e0c7b5 container test-container: STEP: delete the pod May 6 17:51:20.384: INFO: Waiting for pod pod-63ef549c-424b-4cef-80be-570014e0c7b5 to disappear May 6 17:51:20.414: INFO: Pod pod-63ef549c-424b-4cef-80be-570014e0c7b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:51:20.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5493" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:51:20.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 
6 17:51:20.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768" in namespace "downward-api-853" to be "Succeeded or Failed" May 6 17:51:20.950: INFO: Pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768": Phase="Pending", Reason="", readiness=false. Elapsed: 36.90039ms May 6 17:51:23.223: INFO: Pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309443069s May 6 17:51:25.399: INFO: Pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485470893s May 6 17:51:27.447: INFO: Pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.534144198s STEP: Saw pod success May 6 17:51:27.447: INFO: Pod "downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768" satisfied condition "Succeeded or Failed" May 6 17:51:27.484: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768 container client-container: STEP: delete the pod May 6 17:51:27.630: INFO: Waiting for pod downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768 to disappear May 6 17:51:27.675: INFO: Pod downwardapi-volume-9018d1cc-b8ca-48ea-a403-6de3799da768 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:51:27.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-853" for this suite. 
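The downward API test that just passed sets `defaultMode` on the volume and asserts the permissions of the projected files. A minimal illustrative manifest (names and the probed field are assumptions, not taken from the run):

```yaml
# Hypothetical pod projecting metadata.name via the downward API
# with defaultMode 0400 applied to the generated file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example         # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["mounttest", "--file_perm=/etc/podinfo/podname"]  # illustrative args
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                    # the DefaultMode the test asserts
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```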
• [SLOW TEST:7.055 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":894,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:51:27.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-c061fcb1-7565-4018-bbf6-fe6fa6a02d72 STEP: Creating a pod to test consume configMaps May 6 17:51:27.873: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090" in namespace "configmap-6340" to be "Succeeded or Failed" May 6 17:51:27.876: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701541ms May 6 17:51:29.969: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.095995772s May 6 17:51:32.048: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174143929s May 6 17:51:34.288: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414843512s May 6 17:51:36.389: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.515502527s STEP: Saw pod success May 6 17:51:36.389: INFO: Pod "pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090" satisfied condition "Succeeded or Failed" May 6 17:51:36.426: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090 container configmap-volume-test: STEP: delete the pod May 6 17:51:36.995: INFO: Waiting for pod pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090 to disappear May 6 17:51:37.275: INFO: Pod pod-configmaps-5a5d046b-4d58-4eb1-b0c8-66b0a1565090 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:51:37.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6340" for this suite. 
• [SLOW TEST:9.748 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:51:37.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:52:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1508" for this suite. • [SLOW TEST:49.338 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":980,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:52:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 6 17:52:26.925: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:52:30.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8639" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":994,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:52:31.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:52:37.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7014" for this suite. 
• [SLOW TEST:6.597 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":54,"skipped":1003,"failed":0} [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:52:37.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
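The pod whose full object dump follows uses `dnsPolicy: None` with a customized `dnsConfig`, which replaces the cluster DNS defaults entirely. The relevant spec fields look roughly like this (the nameserver and search values are illustrative; the log's own dump does not print them in readable form):

```yaml
# Sketch of a pod spec with fully custom DNS settings.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example                        # assumed name
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
  dnsPolicy: "None"                        # disable kubelet-generated resolv.conf entries
  dnsConfig:
    nameservers:
    - 1.1.1.1                              # assumed value
    searches:
    - resolv.conf.local                    # assumed value
```

With `dnsPolicy: None`, whatever is listed under `dnsConfig` is all the pod gets in `/etc/resolv.conf`, which is what makes the behavior testable.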
May 6 17:52:37.691: INFO: Created pod &Pod{ObjectMeta:{dns-2739 dns-2739 /api/v1/namespaces/dns-2739/pods/dns-2739 dc9798a6-dff7-4fe1-b638-c7989e16fdf5 2053041 0 2020-05-06 17:52:37 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-06 17:52:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w2lht,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w2lht,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w2lht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 17:52:37.702: INFO: The status of Pod dns-2739 is Pending, waiting for it to be Running (with Ready = true) May 6 17:52:39.755: INFO: The status of Pod dns-2739 is Pending, waiting for it to be Running (with Ready = true) May 6 17:52:41.707: INFO: The status of Pod dns-2739 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
May 6 17:52:41.707: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2739 PodName:dns-2739 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:52:41.707: INFO: >>> kubeConfig: /root/.kube/config I0506 17:52:41.734088 7 log.go:172] (0xc0030102c0) (0xc001e9b540) Create stream I0506 17:52:41.734115 7 log.go:172] (0xc0030102c0) (0xc001e9b540) Stream added, broadcasting: 1 I0506 17:52:41.735888 7 log.go:172] (0xc0030102c0) Reply frame received for 1 I0506 17:52:41.735920 7 log.go:172] (0xc0030102c0) (0xc001e9b720) Create stream I0506 17:52:41.735931 7 log.go:172] (0xc0030102c0) (0xc001e9b720) Stream added, broadcasting: 3 I0506 17:52:41.736617 7 log.go:172] (0xc0030102c0) Reply frame received for 3 I0506 17:52:41.736643 7 log.go:172] (0xc0030102c0) (0xc001e9b7c0) Create stream I0506 17:52:41.736654 7 log.go:172] (0xc0030102c0) (0xc001e9b7c0) Stream added, broadcasting: 5 I0506 17:52:41.737556 7 log.go:172] (0xc0030102c0) Reply frame received for 5 I0506 17:52:41.924988 7 log.go:172] (0xc0030102c0) Data frame received for 3 I0506 17:52:41.925033 7 log.go:172] (0xc001e9b720) (3) Data frame handling I0506 17:52:41.925053 7 log.go:172] (0xc001e9b720) (3) Data frame sent I0506 17:52:41.926465 7 log.go:172] (0xc0030102c0) Data frame received for 5 I0506 17:52:41.926497 7 log.go:172] (0xc001e9b7c0) (5) Data frame handling I0506 17:52:41.926522 7 log.go:172] (0xc0030102c0) Data frame received for 3 I0506 17:52:41.926537 7 log.go:172] (0xc001e9b720) (3) Data frame handling I0506 17:52:41.928473 7 log.go:172] (0xc0030102c0) Data frame received for 1 I0506 17:52:41.928488 7 log.go:172] (0xc001e9b540) (1) Data frame handling I0506 17:52:41.928495 7 log.go:172] (0xc001e9b540) (1) Data frame sent I0506 17:52:41.928512 7 log.go:172] (0xc0030102c0) (0xc001e9b540) Stream removed, broadcasting: 1 I0506 17:52:41.928762 7 log.go:172] (0xc0030102c0) Go away received I0506 17:52:41.928785 7 log.go:172] (0xc0030102c0) 
(0xc001e9b540) Stream removed, broadcasting: 1 I0506 17:52:41.928801 7 log.go:172] (0xc0030102c0) (0xc001e9b720) Stream removed, broadcasting: 3 I0506 17:52:41.928809 7 log.go:172] (0xc0030102c0) (0xc001e9b7c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 6 17:52:41.928: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2739 PodName:dns-2739 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:52:41.928: INFO: >>> kubeConfig: /root/.kube/config I0506 17:52:41.955939 7 log.go:172] (0xc00308e370) (0xc00172b7c0) Create stream I0506 17:52:41.955974 7 log.go:172] (0xc00308e370) (0xc00172b7c0) Stream added, broadcasting: 1 I0506 17:52:41.957858 7 log.go:172] (0xc00308e370) Reply frame received for 1 I0506 17:52:41.957902 7 log.go:172] (0xc00308e370) (0xc00172b900) Create stream I0506 17:52:41.957919 7 log.go:172] (0xc00308e370) (0xc00172b900) Stream added, broadcasting: 3 I0506 17:52:41.958651 7 log.go:172] (0xc00308e370) Reply frame received for 3 I0506 17:52:41.958675 7 log.go:172] (0xc00308e370) (0xc00227e460) Create stream I0506 17:52:41.958685 7 log.go:172] (0xc00308e370) (0xc00227e460) Stream added, broadcasting: 5 I0506 17:52:41.959432 7 log.go:172] (0xc00308e370) Reply frame received for 5 I0506 17:52:42.018850 7 log.go:172] (0xc00308e370) Data frame received for 3 I0506 17:52:42.018875 7 log.go:172] (0xc00172b900) (3) Data frame handling I0506 17:52:42.018891 7 log.go:172] (0xc00172b900) (3) Data frame sent I0506 17:52:42.019702 7 log.go:172] (0xc00308e370) Data frame received for 3 I0506 17:52:42.019716 7 log.go:172] (0xc00172b900) (3) Data frame handling I0506 17:52:42.019966 7 log.go:172] (0xc00308e370) Data frame received for 5 I0506 17:52:42.019999 7 log.go:172] (0xc00227e460) (5) Data frame handling I0506 17:52:42.021714 7 log.go:172] (0xc00308e370) Data frame received for 1 I0506 17:52:42.021730 7 log.go:172] (0xc00172b7c0) (1) Data 
frame handling I0506 17:52:42.021737 7 log.go:172] (0xc00172b7c0) (1) Data frame sent I0506 17:52:42.021744 7 log.go:172] (0xc00308e370) (0xc00172b7c0) Stream removed, broadcasting: 1 I0506 17:52:42.021780 7 log.go:172] (0xc00308e370) Go away received I0506 17:52:42.021810 7 log.go:172] (0xc00308e370) (0xc00172b7c0) Stream removed, broadcasting: 1 I0506 17:52:42.021822 7 log.go:172] (0xc00308e370) (0xc00172b900) Stream removed, broadcasting: 3 I0506 17:52:42.021828 7 log.go:172] (0xc00308e370) (0xc00227e460) Stream removed, broadcasting: 5 May 6 17:52:42.021: INFO: Deleting pod dns-2739... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:52:42.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2739" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":55,"skipped":1003,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:52:42.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 17:52:44.040: INFO: Waiting up to 5m0s for pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94" in 
namespace "emptydir-9175" to be "Succeeded or Failed" May 6 17:52:44.099: INFO: Pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94": Phase="Pending", Reason="", readiness=false. Elapsed: 58.468996ms May 6 17:52:46.491: INFO: Pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.450472873s May 6 17:52:48.593: INFO: Pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94": Phase="Running", Reason="", readiness=true. Elapsed: 4.553062478s May 6 17:52:50.598: INFO: Pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.557457935s STEP: Saw pod success May 6 17:52:50.598: INFO: Pod "pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94" satisfied condition "Succeeded or Failed" May 6 17:52:50.600: INFO: Trying to get logs from node kali-worker2 pod pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94 container test-container: STEP: delete the pod May 6 17:52:50.695: INFO: Waiting for pod pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94 to disappear May 6 17:52:50.702: INFO: Pod pod-9b0fee4b-b545-45c6-9da0-0d56a25fad94 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:52:50.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9175" for this suite. 
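The emptydir test above exercises a non-root container writing a 0666-mode file into an `emptyDir` volume on the default (node-disk) medium. A minimal illustrative pod, assuming a generic `busybox` image rather than the `agnhost` mounttest helper the suite actually uses, might look like:

```yaml
# Hypothetical sketch of the scenario under test, not the suite's exact pod.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example
spec:
  securityContext:
    runAsUser: 1000          # non-root, as the [LinuxOnly] variant requires
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  restartPolicy: Never       # pod should reach Succeeded, matching the log
  volumes:
  - name: scratch
    emptyDir: {}             # default medium: backed by the node's filesystem
```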
• [SLOW TEST:8.351 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":1023,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:52:50.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:53:01.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8572" for this suite. 
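The hostAliases test that just passed verifies that entries declared under `spec.hostAliases` are written by the kubelet into the pod's `/etc/hosts`. A minimal sketch (the hostnames and IP here are illustrative, not the test's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example
spec:
  hostAliases:               # kubelet appends these to the pod's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]   # the written entries should be visible here
  restartPolicy: Never
```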
• [SLOW TEST:10.526 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1038,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:53:01.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 6 17:53:01.295: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:53:15.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "init-container-6498" for this suite. • [SLOW TEST:14.183 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":58,"skipped":1050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:53:15.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6955 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-6955 May 6 17:53:15.592: INFO: Found 0 stateful pods, waiting for 1 May 6 17:53:25.951: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 6 17:53:26.371: INFO: Deleting all statefulset in ns statefulset-6955 May 6 17:53:26.374: INFO: Scaling statefulset ss to 0 May 6 17:53:46.543: INFO: Waiting for statefulset status.replicas updated to 0 May 6 17:53:46.546: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:53:46.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6955" for this suite. • [SLOW TEST:31.137 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":59,"skipped":1076,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:53:46.569: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 6 17:53:46.646: INFO: Creating ReplicaSet my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32 May 6 17:53:46.764: INFO: Pod name my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32: Found 0 pods out of 1 May 6 17:53:51.815: INFO: Pod name my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32: Found 1 pods out of 1 May 6 17:53:51.815: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32" is running May 6 17:53:53.931: INFO: Pod "my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32-cjmxj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:46 +0000 UTC Reason: Message:}]) May 6 17:53:53.931: INFO: Trying to dial the pod May 6 17:53:58.942: INFO: Controller my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32: Got expected result from replica 1 [my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32-cjmxj]: "my-hostname-basic-32692937-c77b-4649-ba20-7c1fe5285b32-cjmxj", 1 of 1 required successes so far 
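The ReplicaSet validation above dials each replica and expects it to respond with its own pod name ("Got expected result from replica 1 [...-cjmxj]"). A sketch of such a ReplicaSet, assuming agnhost's serve-hostname behavior as the responder (the suite's generated names are abbreviated here):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]   # each replica answers HTTP with its pod name
        ports:
        - containerPort: 9376
```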
[AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:53:58.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7380" for this suite. • [SLOW TEST:12.382 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":60,"skipped":1090,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:53:58.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components May 6 17:53:59.233: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 6 17:53:59.233: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:10.453: INFO: stderr: "" May 6 17:54:10.453: INFO: stdout: "service/agnhost-slave created\n" May 6 17:54:10.454: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 6 17:54:10.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:11.009: INFO: stderr: "" May 6 17:54:11.009: INFO: stdout: "service/agnhost-master created\n" May 6 17:54:11.009: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 6 17:54:11.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:11.650: INFO: stderr: "" May 6 17:54:11.650: INFO: stdout: "service/frontend created\n" May 6 17:54:11.651: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 6 17:54:11.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:12.394: INFO: stderr: "" May 6 17:54:12.394: INFO: stdout: 
"deployment.apps/frontend created\n" May 6 17:54:12.394: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 6 17:54:12.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:12.870: INFO: stderr: "" May 6 17:54:12.870: INFO: stdout: "deployment.apps/agnhost-master created\n" May 6 17:54:12.870: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 6 17:54:12.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1838' May 6 17:54:13.889: INFO: stderr: "" May 6 17:54:13.889: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 6 17:54:13.889: INFO: Waiting for all frontend pods to be Running. May 6 17:54:23.940: INFO: Waiting for frontend to serve content. May 6 17:54:24.419: INFO: Trying to add a new entry to the guestbook. May 6 17:54:24.465: INFO: Verifying that added entry can be retrieved. May 6 17:54:24.523: INFO: Failed to get response from guestbook. 
err: , response: {"data":""} STEP: using delete to clean up resources May 6 17:54:29.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:30.383: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:30.383: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 6 17:54:30.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:31.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:31.303: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 17:54:31.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:31.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:31.718: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 17:54:31.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:32.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:32.155: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 17:54:32.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:33.090: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:33.090: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 17:54:33.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1838' May 6 17:54:33.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:54:33.892: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 6 17:54:33.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1838" for this suite. 
• [SLOW TEST:35.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":61,"skipped":1093,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 6 17:54:34.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 6 17:54:38.351: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2083.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2083.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
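The probe scripts above derive each pod's DNS A-record name from its IP with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2083.pod.cluster.local"}'`. A minimal offline sketch of that transform (the `podARecord` helper name is an illustration, not part of the test framework):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord mimics the awk pipeline in the probe script: a pod IPv4
// address becomes a dashed name under <namespace>.pod.cluster.local.
func podARecord(ip, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(ip, ".", "-"), namespace)
}

func main() {
	// prints 10-244-1-5.dns-2083.pod.cluster.local
	fmt.Println(podARecord("10.244.1.5", "dns-2083"))
}
```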

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 17:54:54.620: INFO: DNS probes using dns-2083/dns-test-2a67111e-ddd5-445c-b4cb-9658b39f6957 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:54:54.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2083" for this suite.

• [SLOW TEST:14.909 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":63,"skipped":1121,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:54:54.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-2b471ce9-4219-4110-94c1-b8bc772345ae
STEP: Creating a pod to test consume secrets
May  6 17:54:55.279: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69" in namespace "projected-4357" to be "Succeeded or Failed"
May  6 17:54:55.326: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Pending", Reason="", readiness=false. Elapsed: 47.204596ms
May  6 17:54:57.373: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093720528s
May  6 17:54:59.624: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.345042357s
May  6 17:55:01.942: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662549383s
May  6 17:55:03.946: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Running", Reason="", readiness=true. Elapsed: 8.666564365s
May  6 17:55:05.950: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.670511525s
STEP: Saw pod success
May  6 17:55:05.950: INFO: Pod "pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69" satisfied condition "Succeeded or Failed"
May  6 17:55:05.952: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69 container projected-secret-volume-test: 
STEP: delete the pod
May  6 17:55:06.027: INFO: Waiting for pod pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69 to disappear
May  6 17:55:06.037: INFO: Pod pod-projected-secrets-dd10869e-f259-4ae7-9d4a-1f047bf74e69 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:55:06.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4357" for this suite.

• [SLOW TEST:11.317 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:55:06.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  6 17:55:18.710: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  6 17:55:18.727: INFO: Pod pod-with-poststart-http-hook still exists
May  6 17:55:20.727: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  6 17:55:20.732: INFO: Pod pod-with-poststart-http-hook still exists
May  6 17:55:22.727: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  6 17:55:22.731: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:55:22.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1215" for this suite.

• [SLOW TEST:16.694 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1162,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:55:22.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May  6 17:55:22.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-996'
May  6 17:55:23.567: INFO: stderr: ""
May  6 17:55:23.567: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  6 17:55:23.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-996'
May  6 17:55:23.664: INFO: stderr: ""
May  6 17:55:23.664: INFO: stdout: "update-demo-nautilus-24z52 update-demo-nautilus-wb9hp "
May  6 17:55:23.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24z52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:23.787: INFO: stderr: ""
May  6 17:55:23.787: INFO: stdout: ""
May  6 17:55:23.787: INFO: update-demo-nautilus-24z52 is created but not running
May  6 17:55:28.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-996'
May  6 17:55:29.046: INFO: stderr: ""
May  6 17:55:29.046: INFO: stdout: "update-demo-nautilus-24z52 update-demo-nautilus-wb9hp "
May  6 17:55:29.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24z52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:29.267: INFO: stderr: ""
May  6 17:55:29.267: INFO: stdout: ""
May  6 17:55:29.267: INFO: update-demo-nautilus-24z52 is created but not running
May  6 17:55:34.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-996'
May  6 17:55:34.371: INFO: stderr: ""
May  6 17:55:34.371: INFO: stdout: "update-demo-nautilus-24z52 update-demo-nautilus-wb9hp "
May  6 17:55:34.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24z52 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:34.470: INFO: stderr: ""
May  6 17:55:34.470: INFO: stdout: "true"
May  6 17:55:34.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24z52 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:34.562: INFO: stderr: ""
May  6 17:55:34.562: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  6 17:55:34.562: INFO: validating pod update-demo-nautilus-24z52
May  6 17:55:34.567: INFO: got data: {
  "image": "nautilus.jpg"
}

May  6 17:55:34.567: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  6 17:55:34.567: INFO: update-demo-nautilus-24z52 is verified up and running
May  6 17:55:34.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wb9hp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:34.666: INFO: stderr: ""
May  6 17:55:34.666: INFO: stdout: "true"
May  6 17:55:34.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wb9hp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-996'
May  6 17:55:34.750: INFO: stderr: ""
May  6 17:55:34.750: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  6 17:55:34.750: INFO: validating pod update-demo-nautilus-wb9hp
May  6 17:55:34.754: INFO: got data: {
  "image": "nautilus.jpg"
}

May  6 17:55:34.754: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  6 17:55:34.754: INFO: update-demo-nautilus-wb9hp is verified up and running
STEP: using delete to clean up resources
May  6 17:55:34.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-996'
May  6 17:55:34.914: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  6 17:55:34.914: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May  6 17:55:34.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-996'
May  6 17:55:35.009: INFO: stderr: "No resources found in kubectl-996 namespace.\n"
May  6 17:55:35.009: INFO: stdout: ""
May  6 17:55:35.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-996 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  6 17:55:35.097: INFO: stderr: ""
May  6 17:55:35.097: INFO: stdout: "update-demo-nautilus-24z52\nupdate-demo-nautilus-wb9hp\n"
May  6 17:55:35.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-996'
May  6 17:55:35.867: INFO: stderr: "No resources found in kubectl-996 namespace.\n"
May  6 17:55:35.867: INFO: stdout: ""
May  6 17:55:35.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-996 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  6 17:55:36.010: INFO: stderr: ""
May  6 17:55:36.010: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:55:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-996" for this suite.

• [SLOW TEST:13.287 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":66,"skipped":1186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:55:36.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 17:55:37.357: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 17:55:39.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 17:55:41.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384537, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
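The two status dumps above show why the wait keeps polling: `UnavailableReplicas:1` with the `MinimumReplicasUnavailable` condition. A simplified predicate over a stub of the fields printed in the dump (this is an assumption-laden stand-in, not the real appsv1.DeploymentStatus or the framework's exact check):

```go
package main

import "fmt"

// deploymentStatus mirrors only the fields the log dump prints.
type deploymentStatus struct {
	ObservedGeneration  int64
	Replicas            int32
	UpdatedReplicas     int32
	ReadyReplicas       int32
	AvailableReplicas   int32
	UnavailableReplicas int32
}

// complete approximates the readiness check repeated every poll:
// all desired replicas updated and available, none unavailable.
func complete(desired int32, s deploymentStatus) bool {
	return s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired &&
		s.UnavailableReplicas == 0
}

func main() {
	progressing := deploymentStatus{ObservedGeneration: 1, Replicas: 1,
		UpdatedReplicas: 1, UnavailableReplicas: 1} // state in the dump above
	ready := deploymentStatus{ObservedGeneration: 1, Replicas: 1,
		UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(complete(1, progressing), complete(1, ready)) // prints false true
}
```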
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 17:55:44.410: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 17:55:44.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6012-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:55:45.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7167" for this suite.
STEP: Destroying namespace "webhook-7167-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.720 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":67,"skipped":1234,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:55:45.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4279
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  6 17:55:45.851: INFO: Found 0 stateful pods, waiting for 3
May  6 17:55:55.860: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 17:55:55.860: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 17:55:55.860: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May  6 17:56:05.855: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 17:56:05.855: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 17:56:05.855: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May  6 17:56:05.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4279 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 17:56:07.057: INFO: stderr: "I0506 17:56:06.722969    2006 log.go:172] (0xc0003cbb80) (0xc0006dd5e0) Create stream\nI0506 17:56:06.723042    2006 log.go:172] (0xc0003cbb80) (0xc0006dd5e0) Stream added, broadcasting: 1\nI0506 17:56:06.726280    2006 log.go:172] (0xc0003cbb80) Reply frame received for 1\nI0506 17:56:06.726328    2006 log.go:172] (0xc0003cbb80) (0xc000452000) Create stream\nI0506 17:56:06.726341    2006 log.go:172] (0xc0003cbb80) (0xc000452000) Stream added, broadcasting: 3\nI0506 17:56:06.727387    2006 log.go:172] (0xc0003cbb80) Reply frame received for 3\nI0506 17:56:06.727422    2006 log.go:172] (0xc0003cbb80) (0xc0006dd680) Create stream\nI0506 17:56:06.727436    2006 log.go:172] (0xc0003cbb80) (0xc0006dd680) Stream added, broadcasting: 5\nI0506 17:56:06.728452    2006 log.go:172] (0xc0003cbb80) Reply frame received for 5\nI0506 17:56:06.785052    2006 log.go:172] (0xc0003cbb80) Data frame received for 5\nI0506 17:56:06.785074    2006 log.go:172] (0xc0006dd680) (5) Data frame handling\nI0506 17:56:06.785086    2006 log.go:172] (0xc0006dd680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:56:07.050355    2006 log.go:172] (0xc0003cbb80) Data frame received for 5\nI0506 17:56:07.050406    2006 log.go:172] (0xc0006dd680) (5) Data frame handling\nI0506 17:56:07.050428    2006 log.go:172] (0xc0003cbb80) Data frame received for 3\nI0506 17:56:07.050436    2006 log.go:172] (0xc000452000) (3) Data frame handling\nI0506 17:56:07.050445    2006 log.go:172] (0xc000452000) (3) Data frame sent\nI0506 17:56:07.050456    2006 log.go:172] (0xc0003cbb80) Data frame received for 3\nI0506 17:56:07.050466    2006 log.go:172] (0xc000452000) (3) Data frame handling\nI0506 17:56:07.052096    2006 log.go:172] (0xc0003cbb80) Data frame received for 1\nI0506 17:56:07.052121    2006 log.go:172] (0xc0006dd5e0) (1) Data frame handling\nI0506 17:56:07.052148    2006 log.go:172] (0xc0006dd5e0) (1) Data frame sent\nI0506 17:56:07.052164    2006 log.go:172] (0xc0003cbb80) (0xc0006dd5e0) Stream removed, broadcasting: 1\nI0506 17:56:07.052182    2006 log.go:172] (0xc0003cbb80) Go away received\nI0506 17:56:07.052545    2006 log.go:172] (0xc0003cbb80) (0xc0006dd5e0) Stream removed, broadcasting: 1\nI0506 17:56:07.052558    2006 log.go:172] (0xc0003cbb80) (0xc000452000) Stream removed, broadcasting: 3\nI0506 17:56:07.052564    2006 log.go:172] (0xc0003cbb80) (0xc0006dd680) Stream removed, broadcasting: 5\n"
May  6 17:56:07.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 17:56:07.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  6 17:56:17.092: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May  6 17:56:27.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4279 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 17:56:27.500: INFO: stderr: "I0506 17:56:27.376594    2027 log.go:172] (0xc000a7f080) (0xc000af45a0) Create stream\nI0506 17:56:27.376785    2027 log.go:172] (0xc000a7f080) (0xc000af45a0) Stream added, broadcasting: 1\nI0506 17:56:27.382180    2027 log.go:172] (0xc000a7f080) Reply frame received for 1\nI0506 17:56:27.382231    2027 log.go:172] (0xc000a7f080) (0xc000af4640) Create stream\nI0506 17:56:27.382254    2027 log.go:172] (0xc000a7f080) (0xc000af4640) Stream added, broadcasting: 3\nI0506 17:56:27.382970    2027 log.go:172] (0xc000a7f080) Reply frame received for 3\nI0506 17:56:27.382999    2027 log.go:172] (0xc000a7f080) (0xc000a823c0) Create stream\nI0506 17:56:27.383006    2027 log.go:172] (0xc000a7f080) (0xc000a823c0) Stream added, broadcasting: 5\nI0506 17:56:27.383778    2027 log.go:172] (0xc000a7f080) Reply frame received for 5\nI0506 17:56:27.495329    2027 log.go:172] (0xc000a7f080) Data frame received for 3\nI0506 17:56:27.495389    2027 log.go:172] (0xc000af4640) (3) Data frame handling\nI0506 17:56:27.495413    2027 log.go:172] (0xc000af4640) (3) Data frame sent\nI0506 17:56:27.495435    2027 log.go:172] (0xc000a7f080) Data frame received for 3\nI0506 17:56:27.495451    2027 log.go:172] (0xc000af4640) (3) Data frame handling\nI0506 17:56:27.495475    2027 log.go:172] (0xc000a7f080) Data frame received for 5\nI0506 17:56:27.495493    2027 log.go:172] (0xc000a823c0) (5) Data frame handling\nI0506 17:56:27.495512    2027 log.go:172] (0xc000a823c0) (5) Data frame sent\nI0506 17:56:27.495529    2027 log.go:172] (0xc000a7f080) Data frame received for 5\nI0506 17:56:27.495557    2027 log.go:172] (0xc000a823c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 17:56:27.496668    2027 log.go:172] (0xc000a7f080) Data frame received for 1\nI0506 17:56:27.496688    2027 log.go:172] (0xc000af45a0) (1) Data frame handling\nI0506 17:56:27.496698    2027 log.go:172] (0xc000af45a0) (1) Data frame sent\nI0506 17:56:27.496773    2027 log.go:172] (0xc000a7f080) (0xc000af45a0) Stream removed, broadcasting: 1\nI0506 17:56:27.497028    2027 log.go:172] (0xc000a7f080) (0xc000af45a0) Stream removed, broadcasting: 1\nI0506 17:56:27.497043    2027 log.go:172] (0xc000a7f080) (0xc000af4640) Stream removed, broadcasting: 3\nI0506 17:56:27.497052    2027 log.go:172] (0xc000a7f080) (0xc000a823c0) Stream removed, broadcasting: 5\n"
May  6 17:56:27.500: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 17:56:27.500: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 17:56:37.721: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:56:37.721: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 17:56:37.721: INFO: Waiting for Pod statefulset-4279/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 17:56:47.730: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:56:47.730: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 17:56:47.730: INFO: Waiting for Pod statefulset-4279/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 17:56:57.728: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:56:57.728: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 17:57:07.742: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
STEP: Rolling back to a previous revision
May  6 17:57:17.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4279 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 17:57:18.912: INFO: stderr: "I0506 17:57:18.218270    2047 log.go:172] (0xc000a040b0) (0xc000ba6140) Create stream\nI0506 17:57:18.218387    2047 log.go:172] (0xc000a040b0) (0xc000ba6140) Stream added, broadcasting: 1\nI0506 17:57:18.220004    2047 log.go:172] (0xc000a040b0) Reply frame received for 1\nI0506 17:57:18.220037    2047 log.go:172] (0xc000a040b0) (0xc00090ae60) Create stream\nI0506 17:57:18.220046    2047 log.go:172] (0xc000a040b0) (0xc00090ae60) Stream added, broadcasting: 3\nI0506 17:57:18.220808    2047 log.go:172] (0xc000a040b0) Reply frame received for 3\nI0506 17:57:18.220843    2047 log.go:172] (0xc000a040b0) (0xc000ba61e0) Create stream\nI0506 17:57:18.220859    2047 log.go:172] (0xc000a040b0) (0xc000ba61e0) Stream added, broadcasting: 5\nI0506 17:57:18.221839    2047 log.go:172] (0xc000a040b0) Reply frame received for 5\nI0506 17:57:18.276155    2047 log.go:172] (0xc000a040b0) Data frame received for 5\nI0506 17:57:18.276181    2047 log.go:172] (0xc000ba61e0) (5) Data frame handling\nI0506 17:57:18.276195    2047 log.go:172] (0xc000ba61e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 17:57:18.903273    2047 log.go:172] (0xc000a040b0) Data frame received for 3\nI0506 17:57:18.903328    2047 log.go:172] (0xc00090ae60) (3) Data frame handling\nI0506 17:57:18.903389    2047 log.go:172] (0xc00090ae60) (3) Data frame sent\nI0506 17:57:18.903430    2047 log.go:172] (0xc000a040b0) Data frame received for 3\nI0506 17:57:18.903454    2047 log.go:172] (0xc00090ae60) (3) Data frame handling\nI0506 17:57:18.903488    2047 log.go:172] (0xc000a040b0) Data frame received for 5\nI0506 17:57:18.903505    2047 log.go:172] (0xc000ba61e0) (5) Data frame handling\nI0506 17:57:18.906332    2047 log.go:172] (0xc000a040b0) Data frame received for 1\nI0506 17:57:18.906353    2047 log.go:172] (0xc000ba6140) (1) Data frame handling\nI0506 17:57:18.906364    2047 log.go:172] (0xc000ba6140) (1) Data frame sent\nI0506 17:57:18.906374    2047 log.go:172] (0xc000a040b0) (0xc000ba6140) Stream removed, broadcasting: 1\nI0506 17:57:18.906638    2047 log.go:172] (0xc000a040b0) Go away received\nI0506 17:57:18.906715    2047 log.go:172] (0xc000a040b0) (0xc000ba6140) Stream removed, broadcasting: 1\nI0506 17:57:18.906783    2047 log.go:172] (0xc000a040b0) (0xc00090ae60) Stream removed, broadcasting: 3\nI0506 17:57:18.906816    2047 log.go:172] (0xc000a040b0) (0xc000ba61e0) Stream removed, broadcasting: 5\n"
May  6 17:57:18.912: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 17:57:18.912: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  6 17:57:28.968: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May  6 17:57:39.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4279 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 17:57:39.226: INFO: stderr: "I0506 17:57:39.138377    2069 log.go:172] (0xc0009f0000) (0xc00091a000) Create stream\nI0506 17:57:39.138429    2069 log.go:172] (0xc0009f0000) (0xc00091a000) Stream added, broadcasting: 1\nI0506 17:57:39.140746    2069 log.go:172] (0xc0009f0000) Reply frame received for 1\nI0506 17:57:39.140782    2069 log.go:172] (0xc0009f0000) (0xc0003b6aa0) Create stream\nI0506 17:57:39.140794    2069 log.go:172] (0xc0009f0000) (0xc0003b6aa0) Stream added, broadcasting: 3\nI0506 17:57:39.142012    2069 log.go:172] (0xc0009f0000) Reply frame received for 3\nI0506 17:57:39.142051    2069 log.go:172] (0xc0009f0000) (0xc0009da000) Create stream\nI0506 17:57:39.142064    2069 log.go:172] (0xc0009f0000) (0xc0009da000) Stream added, broadcasting: 5\nI0506 17:57:39.142941    2069 log.go:172] (0xc0009f0000) Reply frame received for 5\nI0506 17:57:39.222257    2069 log.go:172] (0xc0009f0000) Data frame received for 3\nI0506 17:57:39.222281    2069 log.go:172] (0xc0003b6aa0) (3) Data frame handling\nI0506 17:57:39.222311    2069 log.go:172] (0xc0009f0000) Data frame received for 5\nI0506 17:57:39.222340    2069 log.go:172] (0xc0009da000) (5) Data frame handling\nI0506 17:57:39.222350    2069 log.go:172] (0xc0009da000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 17:57:39.222359    2069 log.go:172] (0xc0009f0000) Data frame received for 5\nI0506 17:57:39.222367    2069 log.go:172] (0xc0009da000) (5) Data frame handling\nI0506 17:57:39.222385    2069 log.go:172] (0xc0003b6aa0) (3) Data frame sent\nI0506 17:57:39.222393    2069 log.go:172] (0xc0009f0000) Data frame received for 3\nI0506 17:57:39.222404    2069 log.go:172] (0xc0003b6aa0) (3) Data frame handling\nI0506 17:57:39.223330    2069 log.go:172] (0xc0009f0000) Data frame received for 1\nI0506 17:57:39.223345    2069 log.go:172] (0xc00091a000) (1) Data frame handling\nI0506 17:57:39.223354    2069 log.go:172] (0xc00091a000) (1) Data frame sent\nI0506 17:57:39.223364    2069 log.go:172] (0xc0009f0000) (0xc00091a000) Stream removed, broadcasting: 1\nI0506 17:57:39.223374    2069 log.go:172] (0xc0009f0000) Go away received\nI0506 17:57:39.223644    2069 log.go:172] (0xc0009f0000) (0xc00091a000) Stream removed, broadcasting: 1\nI0506 17:57:39.223658    2069 log.go:172] (0xc0009f0000) (0xc0003b6aa0) Stream removed, broadcasting: 3\nI0506 17:57:39.223665    2069 log.go:172] (0xc0009f0000) (0xc0009da000) Stream removed, broadcasting: 5\n"
May  6 17:57:39.226: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 17:57:39.226: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 17:57:49.242: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:57:49.242: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:57:49.242: INFO: Waiting for Pod statefulset-4279/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:57:49.242: INFO: Waiting for Pod statefulset-4279/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:58:00.070: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:58:00.070: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:58:00.070: INFO: Waiting for Pod statefulset-4279/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:58:09.252: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:58:09.252: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:58:09.252: INFO: Waiting for Pod statefulset-4279/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  6 17:58:19.400: INFO: Waiting for StatefulSet statefulset-4279/ss2 to complete update
May  6 17:58:19.400: INFO: Waiting for Pod statefulset-4279/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  6 17:58:29.251: INFO: Deleting all statefulset in ns statefulset-4279
May  6 17:58:29.253: INFO: Scaling statefulset ss2 to 0
May  6 17:59:09.376: INFO: Waiting for statefulset status.replicas updated to 0
May  6 17:59:09.379: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:09.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4279" for this suite.

• [SLOW TEST:203.873 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":68,"skipped":1238,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:09.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-cc1be1a3-ac83-4380-9114-0a219d0f72c2
STEP: Creating a pod to test consume configMaps
May  6 17:59:10.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5" in namespace "projected-2291" to be "Succeeded or Failed"
May  6 17:59:10.824: INFO: Pod "pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 452.167281ms
May  6 17:59:12.829: INFO: Pod "pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.457255624s
May  6 17:59:14.889: INFO: Pod "pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.517166891s
STEP: Saw pod success
May  6 17:59:14.889: INFO: Pod "pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5" satisfied condition "Succeeded or Failed"
May  6 17:59:14.973: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5 container projected-configmap-volume-test: 
STEP: delete the pod
May  6 17:59:15.153: INFO: Waiting for pod pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5 to disappear
May  6 17:59:15.159: INFO: Pod pod-projected-configmaps-af595f82-ca74-43bc-a229-04422a497bc5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:15.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2291" for this suite.

• [SLOW TEST:5.550 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1250,"failed":0}
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:15.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 17:59:15.377: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1" in namespace "security-context-test-6523" to be "Succeeded or Failed"
May  6 17:59:15.416: INFO: Pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.262404ms
May  6 17:59:17.531: INFO: Pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154138251s
May  6 17:59:19.534: INFO: Pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157018082s
May  6 17:59:21.614: INFO: Pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237059716s
May  6 17:59:21.614: INFO: Pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1" satisfied condition "Succeeded or Failed"
May  6 17:59:21.667: INFO: Got logs for pod "busybox-privileged-false-a48361ec-4f8e-4777-b35c-7479ed5d01c1": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:21.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6523" for this suite.

• [SLOW TEST:6.537 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:21.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 17:59:21.924: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May  6 17:59:24.507: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:26.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8056" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":71,"skipped":1273,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:26.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 17:59:27.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  6 17:59:30.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2213 create -f -'
May  6 17:59:36.909: INFO: stderr: ""
May  6 17:59:36.909: INFO: stdout: "e2e-test-crd-publish-openapi-7972-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  6 17:59:36.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2213 delete e2e-test-crd-publish-openapi-7972-crds test-cr'
May  6 17:59:37.009: INFO: stderr: ""
May  6 17:59:37.009: INFO: stdout: "e2e-test-crd-publish-openapi-7972-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May  6 17:59:37.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2213 apply -f -'
May  6 17:59:37.271: INFO: stderr: ""
May  6 17:59:37.271: INFO: stdout: "e2e-test-crd-publish-openapi-7972-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  6 17:59:37.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2213 delete e2e-test-crd-publish-openapi-7972-crds test-cr'
May  6 17:59:37.854: INFO: stderr: ""
May  6 17:59:37.854: INFO: stdout: "e2e-test-crd-publish-openapi-7972-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May  6 17:59:37.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7972-crds'
May  6 17:59:38.918: INFO: stderr: ""
May  6 17:59:38.918: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7972-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:42.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2213" for this suite.

• [SLOW TEST:16.074 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":72,"skipped":1283,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:42.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 17:59:43.415: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 17:59:45.483: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 17:59:47.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384783, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 17:59:50.615: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 17:59:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8398" for this suite.
STEP: Destroying namespace "webhook-8398-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.409 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":73,"skipped":1294,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 17:59:53.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-78762020-57ef-4ac6-bb03-d786b4e0801a
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-78762020-57ef-4ac6-bb03-d786b4e0801a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:03.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2727" for this suite.

• [SLOW TEST:9.213 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1300,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:03.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-225529f0-9a0d-4dca-a72c-45468234c6f2
STEP: Creating a pod to test consume configMaps
May  6 18:00:04.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a" in namespace "projected-9545" to be "Succeeded or Failed"
May  6 18:00:04.158: INFO: Pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 78.977333ms
May  6 18:00:06.162: INFO: Pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083092657s
May  6 18:00:08.238: INFO: Pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158585147s
May  6 18:00:10.242: INFO: Pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.162796458s
STEP: Saw pod success
May  6 18:00:10.242: INFO: Pod "pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a" satisfied condition "Succeeded or Failed"
May  6 18:00:10.246: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a container projected-configmap-volume-test: 
STEP: delete the pod
May  6 18:00:10.418: INFO: Waiting for pod pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a to disappear
May  6 18:00:10.433: INFO: Pod pod-projected-configmaps-b5f1d78f-045e-4973-9fc5-e43785212e7a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:10.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9545" for this suite.

• [SLOW TEST:7.403 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
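The pod-wait pattern in the passing test above ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", followed by repeated phase polls with elapsed times) is a plain poll-until-condition loop. A minimal Python sketch of that pattern, assuming nothing about the real e2e framework's Go implementation; `wait_for_pod_condition`, `get_phase`, and the simulated phase sequence are hypothetical stand-ins:

```python
import time

def wait_for_pod_condition(get_phase, condition_desc, matcher,
                           timeout=300.0, interval=2.0):
    """Poll get_phase() until matcher(phase) is true or timeout expires.

    Mirrors the log's "Waiting up to 5m0s for pod ... to be 'Succeeded or
    Failed'" lines; returns the final phase and the elapsed time.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if matcher(phase):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(
                f'pod never reached "{condition_desc}" within {timeout}s '
                f'(last phase: {phase})')
        time.sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_condition(
    lambda: next(phases),
    "Succeeded or Failed",
    lambda p: p in ("Succeeded", "Failed"),
    timeout=10.0,
    interval=0.01,
)
print(phase)  # Succeeded
```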
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:10.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:10.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6874" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":76,"skipped":1352,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:10.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:00:10.870: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May  6 18:00:10.914: INFO: Number of nodes with available pods: 0
May  6 18:00:10.914: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May  6 18:00:10.974: INFO: Number of nodes with available pods: 0
May  6 18:00:10.974: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:11.978: INFO: Number of nodes with available pods: 0
May  6 18:00:11.978: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:12.978: INFO: Number of nodes with available pods: 0
May  6 18:00:12.978: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:14.143: INFO: Number of nodes with available pods: 0
May  6 18:00:14.143: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:15.564: INFO: Number of nodes with available pods: 0
May  6 18:00:15.564: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:15.977: INFO: Number of nodes with available pods: 0
May  6 18:00:15.977: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:17.004: INFO: Number of nodes with available pods: 1
May  6 18:00:17.004: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May  6 18:00:17.055: INFO: Number of nodes with available pods: 1
May  6 18:00:17.055: INFO: Number of running nodes: 0, number of available pods: 1
May  6 18:00:18.070: INFO: Number of nodes with available pods: 0
May  6 18:00:18.070: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May  6 18:00:18.406: INFO: Number of nodes with available pods: 0
May  6 18:00:18.407: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:19.486: INFO: Number of nodes with available pods: 0
May  6 18:00:19.486: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:20.411: INFO: Number of nodes with available pods: 0
May  6 18:00:20.411: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:21.411: INFO: Number of nodes with available pods: 0
May  6 18:00:21.411: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:22.411: INFO: Number of nodes with available pods: 0
May  6 18:00:22.411: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:23.527: INFO: Number of nodes with available pods: 0
May  6 18:00:23.527: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:24.411: INFO: Number of nodes with available pods: 0
May  6 18:00:24.412: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:25.411: INFO: Number of nodes with available pods: 0
May  6 18:00:25.411: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:26.627: INFO: Number of nodes with available pods: 0
May  6 18:00:26.627: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:27.411: INFO: Number of nodes with available pods: 0
May  6 18:00:27.411: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:28.424: INFO: Number of nodes with available pods: 0
May  6 18:00:28.424: INFO: Node kali-worker is running more than one daemon pod
May  6 18:00:29.411: INFO: Number of nodes with available pods: 1
May  6 18:00:29.411: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3612, will wait for the garbage collector to delete the pods
May  6 18:00:29.477: INFO: Deleting DaemonSet.extensions daemon-set took: 6.985971ms
May  6 18:00:29.777: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230953ms
May  6 18:00:34.081: INFO: Number of nodes with available pods: 0
May  6 18:00:34.081: INFO: Number of running nodes: 0, number of available pods: 0
May  6 18:00:34.084: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3612/daemonsets","resourceVersion":"2055757"},"items":null}

May  6 18:00:34.086: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3612/pods","resourceVersion":"2055757"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:34.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3612" for this suite.

• [SLOW TEST:23.561 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":77,"skipped":1354,"failed":0}
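The node-relabel flow in the DaemonSet test above (daemon pod launched once the node is labeled blue, unscheduled once it turns green) comes down to matching the DaemonSet's nodeSelector against node labels. A minimal sketch under that assumption; node names echo the log but the label key and the helper are illustrative, and the real scheduler is far more involved:

```python
def nodes_that_should_run_daemon(nodes, node_selector):
    """Return names of nodes whose labels satisfy the DaemonSet's nodeSelector.

    A node matches only if every selector key/value pair is present verbatim
    in its labels, mirroring the test's change-label-to-blue/green flow.
    """
    return [
        name for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    ]

nodes = {"kali-worker": {"color": "blue"}, "kali-worker2": {}}
selector = {"color": "blue"}
print(nodes_that_should_run_daemon(nodes, selector))  # ['kali-worker']

# Relabel to green: the daemon pod must be unscheduled from kali-worker.
nodes["kali-worker"]["color"] = "green"
print(nodes_that_should_run_daemon(nodes, selector))  # []
```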
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:34.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May  6 18:00:34.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3996'
May  6 18:00:34.620: INFO: stderr: ""
May  6 18:00:34.620: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  6 18:00:35.624: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:35.624: INFO: Found 0 / 1
May  6 18:00:36.625: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:36.625: INFO: Found 0 / 1
May  6 18:00:37.630: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:37.630: INFO: Found 0 / 1
May  6 18:00:38.627: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:38.627: INFO: Found 0 / 1
May  6 18:00:39.796: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:39.796: INFO: Found 0 / 1
May  6 18:00:40.789: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:40.789: INFO: Found 0 / 1
May  6 18:00:42.076: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:42.076: INFO: Found 0 / 1
May  6 18:00:43.067: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:43.067: INFO: Found 0 / 1
May  6 18:00:44.079: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:44.079: INFO: Found 1 / 1
May  6 18:00:44.079: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
May  6 18:00:44.167: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:44.167: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  6 18:00:44.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-rx8st --namespace=kubectl-3996 -p {"metadata":{"annotations":{"x":"y"}}}'
May  6 18:00:44.877: INFO: stderr: ""
May  6 18:00:44.877: INFO: stdout: "pod/agnhost-master-rx8st patched\n"
STEP: checking annotations
May  6 18:00:45.160: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:00:45.160: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:45.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3996" for this suite.

• [SLOW TEST:10.963 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":78,"skipped":1354,"failed":0}
SSSSSSSSSSSSSSSS
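The `kubectl patch pod agnhost-master-rx8st ... -p {"metadata":{"annotations":{"x":"y"}}}` call in the test above applies a merge-style patch: nested objects merge recursively, scalars replace. A simplified Python model of that dict-merge behavior (strategic merge patch adds list-merge semantics not modeled here, and `merge_patch` is an illustrative helper, not a kubectl API):

```python
import copy

def merge_patch(obj, patch):
    """Apply a JSON-merge-patch-style dict onto obj without mutating it.

    RFC 7386 semantics for objects: nested dicts merge recursively, a None
    value deletes the key, anything else replaces the existing value.
    """
    result = copy.deepcopy(obj)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)
        else:
            result[key] = copy.deepcopy(value)
    return result

pod = {"metadata": {"name": "agnhost-master-rx8st", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'x': 'y'}
```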
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:45.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:00:47.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc" in namespace "downward-api-3474" to be "Succeeded or Failed"
May  6 18:00:47.958: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc": Phase="Pending", Reason="", readiness=false. Elapsed: 888.594181ms
May  6 18:00:50.136: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.067108032s
May  6 18:00:52.424: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.354923079s
May  6 18:00:54.454: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.384963906s
May  6 18:00:56.459: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.38968625s
STEP: Saw pod success
May  6 18:00:56.459: INFO: Pod "downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc" satisfied condition "Succeeded or Failed"
May  6 18:00:56.462: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc container client-container: 
STEP: delete the pod
May  6 18:00:56.483: INFO: Waiting for pod downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc to disappear
May  6 18:00:56.500: INFO: Pod downwardapi-volume-9675033f-b5a9-439a-9b7a-44b492e419cc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:56.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3474" for this suite.

• [SLOW TEST:11.339 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1370,"failed":0}
SSSSSSSSSSSSSSSSSSSS
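The downward API test above exposes the container's CPU request through a volume file. Resource quantities are divided by the field's divisor and rounded up to an integer, so for example a 250m request read with the default divisor "1" yields 1, while divisor "1m" yields 250. A sketch in millicore arithmetic (an illustrative simplification with a hypothetical helper; real Kubernetes uses `resource.Quantity`, and 250m is an example value, not necessarily this test's):

```python
import math

def downward_api_value(quantity_millicores, divisor_millicores=1000):
    """Model how a CPU resourceFieldRef is rendered: divide the quantity by
    the divisor and round up. Default divisor "1" is 1000 millicores."""
    return math.ceil(quantity_millicores / divisor_millicores)

print(downward_api_value(250))     # divisor "1": ceil(0.25) -> 1
print(downward_api_value(250, 1))  # divisor "1m": 250
```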
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:56.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:00:56.586: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:00:57.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5654" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":80,"skipped":1390,"failed":0}

------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:00:57.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:00:57.291: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-1f6dd9e0-7dab-4044-98c1-1c7d737a5985
STEP: Creating a pod to test consume secrets
May  6 18:00:57.605: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9" in namespace "projected-8192" to be "Succeeded or Failed"
May  6 18:00:57.844: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 238.846251ms
May  6 18:00:59.848: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242875811s
May  6 18:01:01.952: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346897963s
May  6 18:01:03.993: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3885391s
May  6 18:01:05.998: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.39288781s
STEP: Saw pod success
May  6 18:01:05.998: INFO: Pod "pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9" satisfied condition "Succeeded or Failed"
May  6 18:01:06.000: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9 container secret-volume-test: 
STEP: delete the pod
May  6 18:01:06.073: INFO: Waiting for pod pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9 to disappear
May  6 18:01:06.124: INFO: Pod pod-projected-secrets-dda294ee-e668-4709-99ef-e71864e45bf9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:01:06.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8192" for this suite.

• [SLOW TEST:8.735 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1397,"failed":0}
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:01:06.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:01:10.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4048" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1397,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:01:10.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:01:21.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8966" for this suite.

• [SLOW TEST:11.600 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":84,"skipped":1420,"failed":0}
SSSSSSSSSSSSSSSSSS
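The ResourceQuota STEP sequence above (create quota, create a ReplicationController, see usage captured, delete it, see usage released) can be modeled as charge/release bookkeeping. A toy Python model; the class and method names are invented for illustration, and the real quota controller recomputes usage asynchronously rather than charging synchronously:

```python
class ResourceQuota:
    """Toy quota model: usage is charged when an object is created and
    released when it is deleted, as the test's STEP lines describe."""

    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def charge(self, resource, n=1):
        if self.used[resource] + n > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += n

    def release(self, resource, n=1):
        self.used[resource] = max(0, self.used[resource] - n)

quota = ResourceQuota({"replicationcontrollers": 1})
quota.charge("replicationcontrollers")   # RC created: usage captured
print(quota.used)                        # {'replicationcontrollers': 1}
quota.release("replicationcontrollers")  # RC deleted: usage released
print(quota.used)                        # {'replicationcontrollers': 0}
```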
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:01:21.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May  6 18:01:22.009: INFO: Waiting up to 5m0s for pod "pod-c92a309a-ee88-4860-88c9-1a6d29647ac3" in namespace "emptydir-990" to be "Succeeded or Failed"
May  6 18:01:22.018: INFO: Pod "pod-c92a309a-ee88-4860-88c9-1a6d29647ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.797069ms
May  6 18:01:24.023: INFO: Pod "pod-c92a309a-ee88-4860-88c9-1a6d29647ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013966389s
May  6 18:01:26.027: INFO: Pod "pod-c92a309a-ee88-4860-88c9-1a6d29647ac3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018814754s
STEP: Saw pod success
May  6 18:01:26.027: INFO: Pod "pod-c92a309a-ee88-4860-88c9-1a6d29647ac3" satisfied condition "Succeeded or Failed"
May  6 18:01:26.031: INFO: Trying to get logs from node kali-worker2 pod pod-c92a309a-ee88-4860-88c9-1a6d29647ac3 container test-container: 
STEP: delete the pod
May  6 18:01:26.070: INFO: Waiting for pod pod-c92a309a-ee88-4860-88c9-1a6d29647ac3 to disappear
May  6 18:01:26.082: INFO: Pod pod-c92a309a-ee88-4860-88c9-1a6d29647ac3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:01:26.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-990" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1438,"failed":0}
SSSSSS
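The (non-root,0777,tmpfs) case above writes a file into a tmpfs-backed emptyDir and verifies its permission bits. Just the mode check can be reproduced locally, with a temporary directory standing in for the volume (an assumption: no tmpfs mount or non-root user is involved in this sketch):

```python
import os
import stat
import tempfile

# A temp directory stands in for the tmpfs-backed emptyDir mount.
with tempfile.TemporaryDirectory() as mount:
    path = os.path.join(mount, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester content")
    # chmod sets the exact bits regardless of umask.
    os.chmod(path, 0o777)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o777
```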
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:01:26.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:01:26.651: INFO: Creating deployment "test-recreate-deployment"
May  6 18:01:26.681: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May  6 18:01:26.704: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May  6 18:01:28.726: INFO: Waiting deployment "test-recreate-deployment" to complete
May  6 18:01:28.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384886, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384886, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384886, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384886, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:01:30.733: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
May  6 18:01:30.742: INFO: Updating deployment test-recreate-deployment
May  6 18:01:30.742: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  6 18:01:31.341: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-66 /apis/apps/v1/namespaces/deployment-66/deployments/test-recreate-deployment def57349-0e1e-4674-8910-f8e35a4221df 2056133 2 2020-05-06 18:01:26 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-06 18:01:30 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}},}} {kube-controller-manager Update apps/v1 2020-05-06 18:01:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}},}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002489358  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 18:01:30 +0000 UTC,LastTransitionTime:2020-05-06 18:01:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet 
"test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-06 18:01:31 +0000 UTC,LastTransitionTime:2020-05-06 18:01:26 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

May  6 18:01:31.355: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-66 /apis/apps/v1/namespaces/deployment-66/replicasets/test-recreate-deployment-d5667d9c7 95758c2f-7031-4ede-a3b2-b2cb6f481aeb 2056131 1 2020-05-06 18:01:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment def57349-0e1e-4674-8910-f8e35a4221df 0xc0023829a0 0xc0023829a1}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:01:30 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"def57349-0e1e-4674-8910-f8e35a4221df\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002382ce8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:01:31.355: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May  6 18:01:31.355: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-66 /apis/apps/v1/namespaces/deployment-66/replicasets/test-recreate-deployment-74d98b5f7c 8c485199-997a-40b0-b540-bd6a564a14d6 2056121 2 2020-05-06 18:01:26 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment def57349-0e1e-4674-8910-f8e35a4221df 0xc002382397 0xc002382398}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:01:30 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"def57349-0e1e-4674-8910-f8e35a4221df\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023827f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:01:31.527: INFO: Pod "test-recreate-deployment-d5667d9c7-jbzkm" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-jbzkm test-recreate-deployment-d5667d9c7- deployment-66 /api/v1/namespaces/deployment-66/pods/test-recreate-deployment-d5667d9c7-jbzkm a9253a1a-d76a-44f9-a331-84006de5aff6 2056134 0 2020-05-06 18:01:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 95758c2f-7031-4ede-a3b2-b2cb6f481aeb 0xc002383d60 0xc002383d61}] []  [{kube-controller-manager Update v1 2020-05-06 18:01:30 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"95758c2f-7031-4ede-a3b2-b2cb6f481aeb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:01:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4kpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4kpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4kpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,
StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:01:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:01:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:01:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:01:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:01:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
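Go's default `%v` formatting renders the managedFields `FieldsV1.Raw` payloads in dumps like the ones above as decimal byte slices (e.g. `Raw:*[123 34 102 ...]`). Such a dump decodes deterministically back to the underlying managed-fields JSON; a minimal sketch, using a shortened stand-in dump rather than one of the full payloads:

```python
import json

# A FieldsV1.Raw dump as printed by Go's %v verb: a slice of decimal byte values.
# This sample is a truncated stand-in for the much longer payloads in the log above.
raw_dump = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"

# Decode each decimal value back to its ASCII character and re-join.
decoded = bytes(int(b) for b in raw_dump.split()).decode("utf-8")
print(decoded)  # the managed-fields JSON text

# The result is ordinary managed-fields JSON and can be parsed as such.
fields = json.loads(decoded)
```

The same one-liner applied to any of the full dumps recovers the `{"f:metadata":...}` structures the API server stored for each field manager.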
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:01:31.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-66" for this suite.

• [SLOW TEST:5.165 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":86,"skipped":1444,"failed":0}
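The dumps in this entry illustrate `Recreate` semantics: the old ReplicaSet (revision 1, agnhost) already shows `Replicas:0` while the new ReplicaSet (revision 2, httpd) is up but unready, so the Deployment briefly reports `AvailableReplicas:0,UnavailableReplicas:1`. A toy sketch of the ordering difference between the two strategies (a simplified model with a hypothetical helper, not the controller's actual code):

```python
def rollout_steps(strategy):
    """Toy model: one old ReplicaSet at 1 replica, one new ReplicaSet desired at 1.
    Returns the scaling steps a Deployment controller performs, in order."""
    if strategy == "Recreate":
        # Recreate: fully scale down the old RS and wait for its pods to
        # terminate before scaling up the new RS. Implies a downtime window.
        return ["scale old RS 1->0", "scale new RS 0->1"]
    elif strategy == "RollingUpdate":
        # RollingUpdate: bring up new pods before (or while) old ones go away,
        # bounded by maxSurge/maxUnavailable.
        return ["scale new RS 0->1", "scale old RS 1->0"]
    raise ValueError(f"unknown strategy {strategy!r}")

steps = rollout_steps("Recreate")
# Old pods are gone before any new pod exists -- matching the log above, where
# revision 1 is already at Replicas:0 while revision 2's pod is still Pending.
assert steps.index("scale old RS 1->0") < steps.index("scale new RS 0->1")
```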
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:01:31.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:02:31.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5080" for this suite.

• [SLOW TEST:60.197 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1450,"failed":0}
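Both readiness-probe tests above hinge on the same rule: a failing readiness probe only flips the pod's `Ready` condition, while only a failing liveness probe triggers a container restart. That is why the pods in these entries stay `Running (Ready = false)` for the whole observation window without ever restarting. A minimal sketch of that kubelet-side distinction (a simplified model, not kubelet code):

```python
def handle_probe_failure(probe_kind, state):
    """Toy model of kubelet probe handling.
    `state` is a dict with 'ready' and 'restart_count' keys."""
    if probe_kind == "readiness":
        # Readiness failures remove the pod from Service endpoints
        # but never kill the container.
        state["ready"] = False
    elif probe_kind == "liveness":
        # Liveness failures cause the container to be restarted.
        state["ready"] = False
        state["restart_count"] += 1
    return state

state = {"ready": True, "restart_count": 0}
for _ in range(30):  # 30 consecutive readiness failures
    state = handle_probe_failure("readiness", state)
# Never ready, never restarted -- the behavior both conformance tests assert.
assert state == {"ready": False, "restart_count": 0}
```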
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:02:31.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:02:33.748: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:02:35.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384954, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:02:38.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384954, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:02:39.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384954, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:02:41.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384954, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384953, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:02:45.104: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May  6 18:02:50.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-3507 to-be-attached-pod -i -c=container1'
May  6 18:02:50.868: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:02:50.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3507" for this suite.
STEP: Destroying namespace "webhook-3507-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.235 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":88,"skipped":1466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:02:50.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May  6 18:02:51.081: INFO: Waiting up to 5m0s for pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca" in namespace "containers-1997" to be "Succeeded or Failed"
May  6 18:02:51.104: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.22216ms
May  6 18:02:53.198: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116335038s
May  6 18:02:55.251: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169443349s
May  6 18:02:57.511: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca": Phase="Running", Reason="", readiness=true. Elapsed: 6.429596527s
May  6 18:02:59.517: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.435420822s
STEP: Saw pod success
May  6 18:02:59.517: INFO: Pod "client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca" satisfied condition "Succeeded or Failed"
May  6 18:02:59.519: INFO: Trying to get logs from node kali-worker pod client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca container test-container: 
STEP: delete the pod
May  6 18:02:59.564: INFO: Waiting for pod client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca to disappear
May  6 18:02:59.644: INFO: Pod client-containers-58ae3539-21f4-43f1-a0d2-208dbecc02ca no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:02:59.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1997" for this suite.

• [SLOW TEST:8.733 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1495,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:02:59.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:03:01.666: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:03:04.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:03:06.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:03:08.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:03:10.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724384981, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:03:13.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:03:14.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1044" for this suite.
STEP: Destroying namespace "webhook-1044-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:15.405 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":90,"skipped":1502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:03:15.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2196f97e-5878-4a05-878b-a7d0fb9f82c2
STEP: Creating a pod to test consume configMaps
May  6 18:03:16.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2" in namespace "configmap-4835" to be "Succeeded or Failed"
May  6 18:03:16.538: INFO: Pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2": Phase="Pending", Reason="", readiness=false. Elapsed: 264.453023ms
May  6 18:03:18.543: INFO: Pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269096944s
May  6 18:03:20.654: INFO: Pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.379920291s
May  6 18:03:22.683: INFO: Pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.40929517s
STEP: Saw pod success
May  6 18:03:22.683: INFO: Pod "pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2" satisfied condition "Succeeded or Failed"
May  6 18:03:22.687: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2 container configmap-volume-test: 
STEP: delete the pod
May  6 18:03:22.901: INFO: Waiting for pod pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2 to disappear
May  6 18:03:22.962: INFO: Pod pod-configmaps-906ff5f4-704c-481b-9025-e446d5fb29f2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:03:22.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4835" for this suite.

• [SLOW TEST:7.845 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1549,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:03:22.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-5553
STEP: creating replication controller nodeport-test in namespace services-5553
I0506 18:03:23.473600       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5553, replica count: 2
I0506 18:03:26.523983       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:29.524224       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:32.524435       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 18:03:32.524: INFO: Creating new exec pod
May  6 18:03:39.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5553 execpodvt7l7 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May  6 18:03:40.014: INFO: stderr: "I0506 18:03:39.780472    2261 log.go:172] (0xc0009d0b00) (0xc0009a6140) Create stream\nI0506 18:03:39.780568    2261 log.go:172] (0xc0009d0b00) (0xc0009a6140) Stream added, broadcasting: 1\nI0506 18:03:39.783636    2261 log.go:172] (0xc0009d0b00) Reply frame received for 1\nI0506 18:03:39.783716    2261 log.go:172] (0xc0009d0b00) (0xc000663220) Create stream\nI0506 18:03:39.783744    2261 log.go:172] (0xc0009d0b00) (0xc000663220) Stream added, broadcasting: 3\nI0506 18:03:39.784743    2261 log.go:172] (0xc0009d0b00) Reply frame received for 3\nI0506 18:03:39.784784    2261 log.go:172] (0xc0009d0b00) (0xc0007d40a0) Create stream\nI0506 18:03:39.784798    2261 log.go:172] (0xc0009d0b00) (0xc0007d40a0) Stream added, broadcasting: 5\nI0506 18:03:39.786069    2261 log.go:172] (0xc0009d0b00) Reply frame received for 5\nI0506 18:03:39.860365    2261 log.go:172] (0xc0009d0b00) Data frame received for 5\nI0506 18:03:39.860386    2261 log.go:172] (0xc0007d40a0) (5) Data frame handling\nI0506 18:03:39.860397    2261 log.go:172] (0xc0007d40a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0506 18:03:40.006115    2261 log.go:172] (0xc0009d0b00) Data frame received for 5\nI0506 18:03:40.006165    2261 log.go:172] (0xc0007d40a0) (5) Data frame handling\nI0506 18:03:40.006191    2261 log.go:172] (0xc0007d40a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0506 18:03:40.008182    2261 log.go:172] (0xc0009d0b00) Data frame received for 5\nI0506 18:03:40.008237    2261 log.go:172] (0xc0007d40a0) (5) Data frame handling\nI0506 18:03:40.008271    2261 log.go:172] (0xc0009d0b00) Data frame received for 3\nI0506 18:03:40.008285    2261 log.go:172] (0xc000663220) (3) Data frame handling\nI0506 18:03:40.009636    2261 log.go:172] (0xc0009d0b00) Data frame received for 1\nI0506 18:03:40.009673    2261 log.go:172] (0xc0009a6140) (1) Data frame handling\nI0506 18:03:40.009703    2261 log.go:172] (0xc0009a6140) (1) Data frame sent\nI0506 18:03:40.009735    2261 log.go:172] (0xc0009d0b00) (0xc0009a6140) Stream removed, broadcasting: 1\nI0506 18:03:40.009756    2261 log.go:172] (0xc0009d0b00) Go away received\nI0506 18:03:40.010136    2261 log.go:172] (0xc0009d0b00) (0xc0009a6140) Stream removed, broadcasting: 1\nI0506 18:03:40.010157    2261 log.go:172] (0xc0009d0b00) (0xc000663220) Stream removed, broadcasting: 3\nI0506 18:03:40.010167    2261 log.go:172] (0xc0009d0b00) (0xc0007d40a0) Stream removed, broadcasting: 5\n"
May  6 18:03:40.014: INFO: stdout: ""
May  6 18:03:40.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5553 execpodvt7l7 -- /bin/sh -x -c nc -zv -t -w 2 10.110.42.4 80'
May  6 18:03:40.219: INFO: stderr: "I0506 18:03:40.151913    2283 log.go:172] (0xc0005f4b00) (0xc00068d540) Create stream\nI0506 18:03:40.151969    2283 log.go:172] (0xc0005f4b00) (0xc00068d540) Stream added, broadcasting: 1\nI0506 18:03:40.154444    2283 log.go:172] (0xc0005f4b00) Reply frame received for 1\nI0506 18:03:40.154490    2283 log.go:172] (0xc0005f4b00) (0xc000996000) Create stream\nI0506 18:03:40.154508    2283 log.go:172] (0xc0005f4b00) (0xc000996000) Stream added, broadcasting: 3\nI0506 18:03:40.155436    2283 log.go:172] (0xc0005f4b00) Reply frame received for 3\nI0506 18:03:40.155469    2283 log.go:172] (0xc0005f4b00) (0xc00068d5e0) Create stream\nI0506 18:03:40.155484    2283 log.go:172] (0xc0005f4b00) (0xc00068d5e0) Stream added, broadcasting: 5\nI0506 18:03:40.156228    2283 log.go:172] (0xc0005f4b00) Reply frame received for 5\nI0506 18:03:40.213422    2283 log.go:172] (0xc0005f4b00) Data frame received for 5\nI0506 18:03:40.213464    2283 log.go:172] (0xc00068d5e0) (5) Data frame handling\nI0506 18:03:40.213478    2283 log.go:172] (0xc00068d5e0) (5) Data frame sent\nI0506 18:03:40.213485    2283 log.go:172] (0xc0005f4b00) Data frame received for 5\nI0506 18:03:40.213490    2283 log.go:172] (0xc00068d5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.42.4 80\nConnection to 10.110.42.4 80 port [tcp/http] succeeded!\nI0506 18:03:40.213509    2283 log.go:172] (0xc0005f4b00) Data frame received for 3\nI0506 18:03:40.213515    2283 log.go:172] (0xc000996000) (3) Data frame handling\nI0506 18:03:40.214571    2283 log.go:172] (0xc0005f4b00) Data frame received for 1\nI0506 18:03:40.214587    2283 log.go:172] (0xc00068d540) (1) Data frame handling\nI0506 18:03:40.214599    2283 log.go:172] (0xc00068d540) (1) Data frame sent\nI0506 18:03:40.214802    2283 log.go:172] (0xc0005f4b00) (0xc00068d540) Stream removed, broadcasting: 1\nI0506 18:03:40.214833    2283 log.go:172] (0xc0005f4b00) Go away received\nI0506 18:03:40.215133    2283 log.go:172] (0xc0005f4b00) (0xc00068d540) Stream removed, broadcasting: 1\nI0506 18:03:40.215152    2283 log.go:172] (0xc0005f4b00) (0xc000996000) Stream removed, broadcasting: 3\nI0506 18:03:40.215167    2283 log.go:172] (0xc0005f4b00) (0xc00068d5e0) Stream removed, broadcasting: 5\n"
May  6 18:03:40.219: INFO: stdout: ""
May  6 18:03:40.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5553 execpodvt7l7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31537'
May  6 18:03:40.415: INFO: stderr: "I0506 18:03:40.335575    2306 log.go:172] (0xc000b24210) (0xc000b500a0) Create stream\nI0506 18:03:40.335616    2306 log.go:172] (0xc000b24210) (0xc000b500a0) Stream added, broadcasting: 1\nI0506 18:03:40.337888    2306 log.go:172] (0xc000b24210) Reply frame received for 1\nI0506 18:03:40.337929    2306 log.go:172] (0xc000b24210) (0xc0006b9220) Create stream\nI0506 18:03:40.337950    2306 log.go:172] (0xc000b24210) (0xc0006b9220) Stream added, broadcasting: 3\nI0506 18:03:40.338684    2306 log.go:172] (0xc000b24210) Reply frame received for 3\nI0506 18:03:40.338711    2306 log.go:172] (0xc000b24210) (0xc00080a000) Create stream\nI0506 18:03:40.338720    2306 log.go:172] (0xc000b24210) (0xc00080a000) Stream added, broadcasting: 5\nI0506 18:03:40.339335    2306 log.go:172] (0xc000b24210) Reply frame received for 5\nI0506 18:03:40.411558    2306 log.go:172] (0xc000b24210) Data frame received for 3\nI0506 18:03:40.411592    2306 log.go:172] (0xc0006b9220) (3) Data frame handling\nI0506 18:03:40.411612    2306 log.go:172] (0xc000b24210) Data frame received for 5\nI0506 18:03:40.411622    2306 log.go:172] (0xc00080a000) (5) Data frame handling\nI0506 18:03:40.411632    2306 log.go:172] (0xc00080a000) (5) Data frame sent\nI0506 18:03:40.411640    2306 log.go:172] (0xc000b24210) Data frame received for 5\nI0506 18:03:40.411646    2306 log.go:172] (0xc00080a000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 31537\nConnection to 172.17.0.15 31537 port [tcp/31537] succeeded!\nI0506 18:03:40.412485    2306 log.go:172] (0xc000b24210) Data frame received for 1\nI0506 18:03:40.412508    2306 log.go:172] (0xc000b500a0) (1) Data frame handling\nI0506 18:03:40.412537    2306 log.go:172] (0xc000b500a0) (1) Data frame sent\nI0506 18:03:40.412556    2306 log.go:172] (0xc000b24210) (0xc000b500a0) Stream removed, broadcasting: 1\nI0506 18:03:40.412569    2306 log.go:172] (0xc000b24210) Go away received\nI0506 18:03:40.412850    2306 log.go:172] (0xc000b24210) (0xc000b500a0) Stream removed, broadcasting: 1\nI0506 18:03:40.412867    2306 log.go:172] (0xc000b24210) (0xc0006b9220) Stream removed, broadcasting: 3\nI0506 18:03:40.412875    2306 log.go:172] (0xc000b24210) (0xc00080a000) Stream removed, broadcasting: 5\n"
May  6 18:03:40.415: INFO: stdout: ""
May  6 18:03:40.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-5553 execpodvt7l7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31537'
May  6 18:03:40.620: INFO: stderr: "I0506 18:03:40.540754    2324 log.go:172] (0xc000a3d080) (0xc000a62500) Create stream\nI0506 18:03:40.540806    2324 log.go:172] (0xc000a3d080) (0xc000a62500) Stream added, broadcasting: 1\nI0506 18:03:40.543108    2324 log.go:172] (0xc000a3d080) Reply frame received for 1\nI0506 18:03:40.543137    2324 log.go:172] (0xc000a3d080) (0xc000a625a0) Create stream\nI0506 18:03:40.543145    2324 log.go:172] (0xc000a3d080) (0xc000a625a0) Stream added, broadcasting: 3\nI0506 18:03:40.543968    2324 log.go:172] (0xc000a3d080) Reply frame received for 3\nI0506 18:03:40.544000    2324 log.go:172] (0xc000a3d080) (0xc000a62640) Create stream\nI0506 18:03:40.544011    2324 log.go:172] (0xc000a3d080) (0xc000a62640) Stream added, broadcasting: 5\nI0506 18:03:40.544733    2324 log.go:172] (0xc000a3d080) Reply frame received for 5\nI0506 18:03:40.614890    2324 log.go:172] (0xc000a3d080) Data frame received for 3\nI0506 18:03:40.614925    2324 log.go:172] (0xc000a625a0) (3) Data frame handling\nI0506 18:03:40.614956    2324 log.go:172] (0xc000a3d080) Data frame received for 5\nI0506 18:03:40.614974    2324 log.go:172] (0xc000a62640) (5) Data frame handling\nI0506 18:03:40.615012    2324 log.go:172] (0xc000a62640) (5) Data frame sent\nI0506 18:03:40.615028    2324 log.go:172] (0xc000a3d080) Data frame received for 5\nI0506 18:03:40.615036    2324 log.go:172] (0xc000a62640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31537\nConnection to 172.17.0.18 31537 port [tcp/31537] succeeded!\nI0506 18:03:40.617545    2324 log.go:172] (0xc000a3d080) Data frame received for 1\nI0506 18:03:40.617563    2324 log.go:172] (0xc000a62500) (1) Data frame handling\nI0506 18:03:40.617582    2324 log.go:172] (0xc000a62500) (1) Data frame sent\nI0506 18:03:40.617596    2324 log.go:172] (0xc000a3d080) (0xc000a62500) Stream removed, broadcasting: 1\nI0506 18:03:40.617611    2324 log.go:172] (0xc000a3d080) Go away received\nI0506 18:03:40.617938    2324 log.go:172] (0xc000a3d080) (0xc000a62500) Stream removed, broadcasting: 1\nI0506 18:03:40.617953    2324 log.go:172] (0xc000a3d080) (0xc000a625a0) Stream removed, broadcasting: 3\nI0506 18:03:40.617961    2324 log.go:172] (0xc000a3d080) (0xc000a62640) Stream removed, broadcasting: 5\n"
May  6 18:03:40.620: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:03:40.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5553" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.656 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":92,"skipped":1582,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:03:40.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:03:40.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3811
I0506 18:03:41.329751       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3811, replica count: 1
I0506 18:03:42.380119       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:43.380297       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:44.380512       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:45.380667       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:46.380846       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:47.381042       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:48.381298       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:49.381504       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:03:50.381748       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 18:03:50.878: INFO: Created: latency-svc-zvtv2
May  6 18:03:51.124: INFO: Got endpoints: latency-svc-zvtv2 [642.970957ms]
May  6 18:03:51.545: INFO: Created: latency-svc-t9qzq
May  6 18:03:52.175: INFO: Got endpoints: latency-svc-t9qzq [1.050238429s]
May  6 18:03:52.497: INFO: Created: latency-svc-mpgxx
May  6 18:03:52.563: INFO: Got endpoints: latency-svc-mpgxx [1.437296418s]
May  6 18:03:53.136: INFO: Created: latency-svc-m8ggs
May  6 18:03:53.203: INFO: Got endpoints: latency-svc-m8ggs [2.07860004s]
May  6 18:03:53.431: INFO: Created: latency-svc-8d5j9
May  6 18:03:53.497: INFO: Got endpoints: latency-svc-8d5j9 [2.371840113s]
May  6 18:03:53.670: INFO: Created: latency-svc-wr4fk
May  6 18:03:53.708: INFO: Got endpoints: latency-svc-wr4fk [2.583459078s]
May  6 18:03:53.809: INFO: Created: latency-svc-s9pt2
May  6 18:03:53.852: INFO: Got endpoints: latency-svc-s9pt2 [2.726947936s]
May  6 18:03:53.963: INFO: Created: latency-svc-dzgr8
May  6 18:03:54.006: INFO: Got endpoints: latency-svc-dzgr8 [2.880843215s]
May  6 18:03:54.007: INFO: Created: latency-svc-2dpjb
May  6 18:03:54.058: INFO: Got endpoints: latency-svc-2dpjb [2.933028003s]
May  6 18:03:54.136: INFO: Created: latency-svc-f4gpx
May  6 18:03:54.162: INFO: Got endpoints: latency-svc-f4gpx [3.037388043s]
May  6 18:03:54.264: INFO: Created: latency-svc-fj6h7
May  6 18:03:54.284: INFO: Got endpoints: latency-svc-fj6h7 [3.158734219s]
May  6 18:03:54.312: INFO: Created: latency-svc-dmx5p
May  6 18:03:54.363: INFO: Got endpoints: latency-svc-dmx5p [3.237447354s]
May  6 18:03:54.417: INFO: Created: latency-svc-bb64b
May  6 18:03:54.418: INFO: Got endpoints: latency-svc-bb64b [3.292492057s]
May  6 18:03:54.654: INFO: Created: latency-svc-fxct6
May  6 18:03:55.312: INFO: Got endpoints: latency-svc-fxct6 [4.186875401s]
May  6 18:03:55.316: INFO: Created: latency-svc-6gkvn
May  6 18:03:55.504: INFO: Got endpoints: latency-svc-6gkvn [4.378472156s]
May  6 18:03:55.643: INFO: Created: latency-svc-8dh7r
May  6 18:03:55.682: INFO: Got endpoints: latency-svc-8dh7r [4.556528874s]
May  6 18:03:55.792: INFO: Created: latency-svc-sv7tv
May  6 18:03:55.795: INFO: Got endpoints: latency-svc-sv7tv [3.619732425s]
May  6 18:03:55.852: INFO: Created: latency-svc-w98jv
May  6 18:03:56.368: INFO: Got endpoints: latency-svc-w98jv [3.805557673s]
May  6 18:03:56.862: INFO: Created: latency-svc-nn68c
May  6 18:03:57.098: INFO: Got endpoints: latency-svc-nn68c [3.894445629s]
May  6 18:03:57.335: INFO: Created: latency-svc-9t9lh
May  6 18:03:57.371: INFO: Got endpoints: latency-svc-9t9lh [3.87387703s]
May  6 18:03:57.533: INFO: Created: latency-svc-zkz45
May  6 18:03:57.851: INFO: Got endpoints: latency-svc-zkz45 [4.142935197s]
May  6 18:03:57.928: INFO: Created: latency-svc-474x7
May  6 18:03:58.038: INFO: Got endpoints: latency-svc-474x7 [4.18595214s]
May  6 18:03:58.235: INFO: Created: latency-svc-2mmxm
May  6 18:03:58.277: INFO: Got endpoints: latency-svc-2mmxm [4.270913702s]
May  6 18:03:58.333: INFO: Created: latency-svc-rqd7v
May  6 18:03:58.407: INFO: Got endpoints: latency-svc-rqd7v [4.349229535s]
May  6 18:03:58.423: INFO: Created: latency-svc-8r9jw
May  6 18:03:58.439: INFO: Got endpoints: latency-svc-8r9jw [4.276405205s]
May  6 18:03:58.484: INFO: Created: latency-svc-4b2jc
May  6 18:03:58.539: INFO: Got endpoints: latency-svc-4b2jc [4.255262613s]
May  6 18:03:58.558: INFO: Created: latency-svc-26qjz
May  6 18:03:58.566: INFO: Got endpoints: latency-svc-26qjz [4.203156997s]
May  6 18:03:58.583: INFO: Created: latency-svc-km8w8
May  6 18:03:58.589: INFO: Got endpoints: latency-svc-km8w8 [4.171754199s]
May  6 18:03:58.607: INFO: Created: latency-svc-974ch
May  6 18:03:58.621: INFO: Got endpoints: latency-svc-974ch [3.308812993s]
May  6 18:03:58.702: INFO: Created: latency-svc-cmzh4
May  6 18:03:58.710: INFO: Got endpoints: latency-svc-cmzh4 [3.206589081s]
May  6 18:03:58.729: INFO: Created: latency-svc-dc2fm
May  6 18:03:58.772: INFO: Got endpoints: latency-svc-dc2fm [3.089961493s]
May  6 18:03:58.846: INFO: Created: latency-svc-jbmf5
May  6 18:03:58.856: INFO: Got endpoints: latency-svc-jbmf5 [3.061734749s]
May  6 18:03:58.877: INFO: Created: latency-svc-gndxg
May  6 18:03:58.890: INFO: Got endpoints: latency-svc-gndxg [2.522170214s]
May  6 18:03:58.922: INFO: Created: latency-svc-k5zvc
May  6 18:03:58.988: INFO: Got endpoints: latency-svc-k5zvc [1.890106002s]
May  6 18:03:59.040: INFO: Created: latency-svc-s568d
May  6 18:03:59.053: INFO: Got endpoints: latency-svc-s568d [1.682336901s]
May  6 18:03:59.725: INFO: Created: latency-svc-t24rl
May  6 18:03:59.977: INFO: Got endpoints: latency-svc-t24rl [2.125735835s]
May  6 18:04:00.215: INFO: Created: latency-svc-r8rnk
May  6 18:04:00.306: INFO: Got endpoints: latency-svc-r8rnk [2.267866485s]
May  6 18:04:00.899: INFO: Created: latency-svc-hrjdk
May  6 18:04:00.940: INFO: Got endpoints: latency-svc-hrjdk [2.663350753s]
May  6 18:04:01.294: INFO: Created: latency-svc-mjss6
May  6 18:04:01.595: INFO: Got endpoints: latency-svc-mjss6 [3.187129056s]
May  6 18:04:01.597: INFO: Created: latency-svc-fgpr5
May  6 18:04:01.816: INFO: Got endpoints: latency-svc-fgpr5 [3.377133012s]
May  6 18:04:02.079: INFO: Created: latency-svc-5wxlz
May  6 18:04:02.342: INFO: Got endpoints: latency-svc-5wxlz [3.802836388s]
May  6 18:04:02.581: INFO: Created: latency-svc-8r7b5
May  6 18:04:02.731: INFO: Got endpoints: latency-svc-8r7b5 [4.165461176s]
May  6 18:04:02.763: INFO: Created: latency-svc-flf25
May  6 18:04:02.769: INFO: Got endpoints: latency-svc-flf25 [4.179467828s]
May  6 18:04:02.817: INFO: Created: latency-svc-qmhjh
May  6 18:04:02.899: INFO: Got endpoints: latency-svc-qmhjh [4.277849056s]
May  6 18:04:02.932: INFO: Created: latency-svc-m6fkt
May  6 18:04:02.956: INFO: Got endpoints: latency-svc-m6fkt [4.244994961s]
May  6 18:04:03.061: INFO: Created: latency-svc-xskgg
May  6 18:04:03.084: INFO: Got endpoints: latency-svc-xskgg [4.312186316s]
May  6 18:04:03.114: INFO: Created: latency-svc-tbfw4
May  6 18:04:03.124: INFO: Got endpoints: latency-svc-tbfw4 [4.267382759s]
May  6 18:04:03.158: INFO: Created: latency-svc-j9b6l
May  6 18:04:03.252: INFO: Got endpoints: latency-svc-j9b6l [4.361399424s]
May  6 18:04:03.283: INFO: Created: latency-svc-m4lc4
May  6 18:04:03.292: INFO: Got endpoints: latency-svc-m4lc4 [4.304239583s]
May  6 18:04:03.319: INFO: Created: latency-svc-rkd7n
May  6 18:04:03.341: INFO: Got endpoints: latency-svc-rkd7n [4.287500925s]
May  6 18:04:03.432: INFO: Created: latency-svc-8sznq
May  6 18:04:03.468: INFO: Got endpoints: latency-svc-8sznq [3.491349848s]
May  6 18:04:03.519: INFO: Created: latency-svc-gpfrg
May  6 18:04:03.563: INFO: Got endpoints: latency-svc-gpfrg [3.257349538s]
May  6 18:04:03.607: INFO: Created: latency-svc-vqm62
May  6 18:04:03.617: INFO: Got endpoints: latency-svc-vqm62 [2.676857949s]
May  6 18:04:03.660: INFO: Created: latency-svc-5h76z
May  6 18:04:03.725: INFO: Got endpoints: latency-svc-5h76z [2.130342111s]
May  6 18:04:03.727: INFO: Created: latency-svc-sttb8
May  6 18:04:03.732: INFO: Got endpoints: latency-svc-sttb8 [1.915607795s]
May  6 18:04:03.758: INFO: Created: latency-svc-sqbmg
May  6 18:04:03.774: INFO: Got endpoints: latency-svc-sqbmg [1.432145839s]
May  6 18:04:03.869: INFO: Created: latency-svc-s5r5f
May  6 18:04:03.882: INFO: Got endpoints: latency-svc-s5r5f [1.15083648s]
May  6 18:04:03.900: INFO: Created: latency-svc-rzsh2
May  6 18:04:03.918: INFO: Got endpoints: latency-svc-rzsh2 [1.149335038s]
May  6 18:04:04.498: INFO: Created: latency-svc-9zpcs
May  6 18:04:04.531: INFO: Got endpoints: latency-svc-9zpcs [1.631959339s]
May  6 18:04:04.563: INFO: Created: latency-svc-8p5kk
May  6 18:04:04.579: INFO: Got endpoints: latency-svc-8p5kk [1.623007597s]
May  6 18:04:04.641: INFO: Created: latency-svc-hvsms
May  6 18:04:04.971: INFO: Got endpoints: latency-svc-hvsms [1.887115205s]
May  6 18:04:04.999: INFO: Created: latency-svc-9d7lf
May  6 18:04:05.133: INFO: Got endpoints: latency-svc-9d7lf [2.008581381s]
May  6 18:04:05.217: INFO: Created: latency-svc-vh25l
May  6 18:04:05.288: INFO: Got endpoints: latency-svc-vh25l [2.036515597s]
May  6 18:04:05.317: INFO: Created: latency-svc-tfxsq
May  6 18:04:05.334: INFO: Got endpoints: latency-svc-tfxsq [2.041730883s]
May  6 18:04:05.362: INFO: Created: latency-svc-9kg5k
May  6 18:04:05.376: INFO: Got endpoints: latency-svc-9kg5k [2.035026276s]
May  6 18:04:05.461: INFO: Created: latency-svc-srgqq
May  6 18:04:05.472: INFO: Got endpoints: latency-svc-srgqq [2.003680513s]
May  6 18:04:05.496: INFO: Created: latency-svc-r842f
May  6 18:04:05.515: INFO: Got endpoints: latency-svc-r842f [1.951216034s]
May  6 18:04:05.665: INFO: Created: latency-svc-dbtvs
May  6 18:04:05.689: INFO: Got endpoints: latency-svc-dbtvs [2.071350395s]
May  6 18:04:05.709: INFO: Created: latency-svc-b9tj9
May  6 18:04:05.779: INFO: Got endpoints: latency-svc-b9tj9 [2.054072534s]
May  6 18:04:05.854: INFO: Created: latency-svc-x4dc7
May  6 18:04:05.905: INFO: Got endpoints: latency-svc-x4dc7 [2.173452556s]
May  6 18:04:05.980: INFO: Created: latency-svc-hg58n
May  6 18:04:06.096: INFO: Got endpoints: latency-svc-hg58n [2.321848512s]
May  6 18:04:06.190: INFO: Created: latency-svc-vxzxh
May  6 18:04:06.300: INFO: Got endpoints: latency-svc-vxzxh [2.418009287s]
May  6 18:04:06.346: INFO: Created: latency-svc-wtnst
May  6 18:04:06.379: INFO: Got endpoints: latency-svc-wtnst [2.460521997s]
May  6 18:04:06.475: INFO: Created: latency-svc-hrrwp
May  6 18:04:06.485: INFO: Got endpoints: latency-svc-hrrwp [1.954287344s]
May  6 18:04:06.501: INFO: Created: latency-svc-zbwns
May  6 18:04:06.518: INFO: Got endpoints: latency-svc-zbwns [1.938910176s]
May  6 18:04:06.568: INFO: Created: latency-svc-h8nb7
May  6 18:04:06.637: INFO: Got endpoints: latency-svc-h8nb7 [1.665898318s]
May  6 18:04:06.668: INFO: Created: latency-svc-knln7
May  6 18:04:06.680: INFO: Got endpoints: latency-svc-knln7 [1.547147636s]
May  6 18:04:06.767: INFO: Created: latency-svc-bcnx6
May  6 18:04:06.771: INFO: Got endpoints: latency-svc-bcnx6 [1.482066169s]
May  6 18:04:06.826: INFO: Created: latency-svc-wrtjt
May  6 18:04:06.849: INFO: Got endpoints: latency-svc-wrtjt [1.515095467s]
May  6 18:04:06.958: INFO: Created: latency-svc-kd2wj
May  6 18:04:06.987: INFO: Got endpoints: latency-svc-kd2wj [1.610525397s]
May  6 18:04:07.023: INFO: Created: latency-svc-4k6j5
May  6 18:04:07.041: INFO: Got endpoints: latency-svc-4k6j5 [1.56926826s]
May  6 18:04:07.121: INFO: Created: latency-svc-pxpgf
May  6 18:04:07.173: INFO: Got endpoints: latency-svc-pxpgf [1.658427603s]
May  6 18:04:07.204: INFO: Created: latency-svc-p5fxb
May  6 18:04:07.306: INFO: Got endpoints: latency-svc-p5fxb [1.617285858s]
May  6 18:04:07.307: INFO: Created: latency-svc-hcgx4
May  6 18:04:07.333: INFO: Got endpoints: latency-svc-hcgx4 [1.553841051s]
May  6 18:04:07.368: INFO: Created: latency-svc-fh94s
May  6 18:04:07.379: INFO: Got endpoints: latency-svc-fh94s [1.473622657s]
May  6 18:04:07.479: INFO: Created: latency-svc-kkdbm
May  6 18:04:07.900: INFO: Got endpoints: latency-svc-kkdbm [1.803841804s]
May  6 18:04:07.902: INFO: Created: latency-svc-dz78b
May  6 18:04:08.035: INFO: Got endpoints: latency-svc-dz78b [1.734715154s]
May  6 18:04:08.050: INFO: Created: latency-svc-v5wfn
May  6 18:04:08.055: INFO: Got endpoints: latency-svc-v5wfn [1.676578321s]
May  6 18:04:08.110: INFO: Created: latency-svc-j49h8
May  6 18:04:08.180: INFO: Got endpoints: latency-svc-j49h8 [1.694499497s]
May  6 18:04:08.183: INFO: Created: latency-svc-9w9gw
May  6 18:04:08.213: INFO: Got endpoints: latency-svc-9w9gw [1.695505341s]
May  6 18:04:08.570: INFO: Created: latency-svc-xdj5s
May  6 18:04:08.612: INFO: Got endpoints: latency-svc-xdj5s [1.974658922s]
May  6 18:04:08.756: INFO: Created: latency-svc-rzxjv
May  6 18:04:08.776: INFO: Got endpoints: latency-svc-rzxjv [2.096714137s]
May  6 18:04:08.815: INFO: Created: latency-svc-sqwpv
May  6 18:04:08.843: INFO: Got endpoints: latency-svc-sqwpv [2.072322632s]
May  6 18:04:08.911: INFO: Created: latency-svc-dwc88
May  6 18:04:08.948: INFO: Got endpoints: latency-svc-dwc88 [2.098855924s]
May  6 18:04:09.100: INFO: Created: latency-svc-z896c
May  6 18:04:09.103: INFO: Got endpoints: latency-svc-z896c [2.116716431s]
May  6 18:04:09.131: INFO: Created: latency-svc-bgl6c
May  6 18:04:09.151: INFO: Got endpoints: latency-svc-bgl6c [2.109289334s]
May  6 18:04:09.246: INFO: Created: latency-svc-7t92m
May  6 18:04:09.263: INFO: Got endpoints: latency-svc-7t92m [2.090198145s]
May  6 18:04:09.450: INFO: Created: latency-svc-h5qcb
May  6 18:04:09.462: INFO: Got endpoints: latency-svc-h5qcb [2.155784546s]
May  6 18:04:09.777: INFO: Created: latency-svc-68nn2
May  6 18:04:09.951: INFO: Got endpoints: latency-svc-68nn2 [2.617779405s]
May  6 18:04:10.025: INFO: Created: latency-svc-wth9j
May  6 18:04:10.074: INFO: Got endpoints: latency-svc-wth9j [2.694673441s]
May  6 18:04:10.404: INFO: Created: latency-svc-nvvzg
May  6 18:04:10.444: INFO: Got endpoints: latency-svc-nvvzg [2.543697492s]
May  6 18:04:10.494: INFO: Created: latency-svc-hm7mj
May  6 18:04:10.671: INFO: Got endpoints: latency-svc-hm7mj [2.635816613s]
May  6 18:04:10.683: INFO: Created: latency-svc-tbm6j
May  6 18:04:10.770: INFO: Got endpoints: latency-svc-tbm6j [2.714230811s]
May  6 18:04:11.061: INFO: Created: latency-svc-89s66
May  6 18:04:11.061: INFO: Created: latency-svc-klhgf
May  6 18:04:11.066: INFO: Got endpoints: latency-svc-klhgf [2.886408414s]
May  6 18:04:11.141: INFO: Got endpoints: latency-svc-89s66 [2.927708127s]
May  6 18:04:11.363: INFO: Created: latency-svc-p7g4f
May  6 18:04:11.366: INFO: Got endpoints: latency-svc-p7g4f [2.754562062s]
May  6 18:04:11.691: INFO: Created: latency-svc-6v798
May  6 18:04:11.728: INFO: Got endpoints: latency-svc-6v798 [2.951945704s]
May  6 18:04:12.067: INFO: Created: latency-svc-wx79z
May  6 18:04:12.119: INFO: Got endpoints: latency-svc-wx79z [3.276427233s]
May  6 18:04:12.122: INFO: Created: latency-svc-f92f6
May  6 18:04:12.198: INFO: Got endpoints: latency-svc-f92f6 [3.249849722s]
May  6 18:04:12.254: INFO: Created: latency-svc-62fww
May  6 18:04:12.268: INFO: Got endpoints: latency-svc-62fww [3.164715098s]
May  6 18:04:12.367: INFO: Created: latency-svc-s8s7z
May  6 18:04:12.382: INFO: Got endpoints: latency-svc-s8s7z [3.231566689s]
May  6 18:04:12.416: INFO: Created: latency-svc-cq484
May  6 18:04:12.449: INFO: Got endpoints: latency-svc-cq484 [3.18588758s]
May  6 18:04:12.533: INFO: Created: latency-svc-mwztn
May  6 18:04:12.544: INFO: Got endpoints: latency-svc-mwztn [3.08243361s]
May  6 18:04:12.574: INFO: Created: latency-svc-fc94x
May  6 18:04:12.594: INFO: Got endpoints: latency-svc-fc94x [2.642679496s]
May  6 18:04:12.617: INFO: Created: latency-svc-mzk4s
May  6 18:04:12.683: INFO: Got endpoints: latency-svc-mzk4s [2.60905386s]
May  6 18:04:12.713: INFO: Created: latency-svc-pjb2r
May  6 18:04:12.750: INFO: Got endpoints: latency-svc-pjb2r [2.305848896s]
May  6 18:04:12.770: INFO: Created: latency-svc-vbc5w
May  6 18:04:12.815: INFO: Got endpoints: latency-svc-vbc5w [2.143927252s]
May  6 18:04:12.831: INFO: Created: latency-svc-bzn8j
May  6 18:04:12.863: INFO: Got endpoints: latency-svc-bzn8j [2.093214506s]
May  6 18:04:13.121: INFO: Created: latency-svc-ff2bx
May  6 18:04:13.126: INFO: Got endpoints: latency-svc-ff2bx [2.05942066s]
May  6 18:04:13.540: INFO: Created: latency-svc-xlv85
May  6 18:04:13.560: INFO: Got endpoints: latency-svc-xlv85 [2.418954886s]
May  6 18:04:13.638: INFO: Created: latency-svc-wvqnp
May  6 18:04:13.677: INFO: Got endpoints: latency-svc-wvqnp [2.310788685s]
May  6 18:04:13.679: INFO: Created: latency-svc-d4dmh
May  6 18:04:13.686: INFO: Got endpoints: latency-svc-d4dmh [1.957341818s]
May  6 18:04:13.710: INFO: Created: latency-svc-2t9vw
May  6 18:04:13.716: INFO: Got endpoints: latency-svc-2t9vw [1.596475253s]
May  6 18:04:13.743: INFO: Created: latency-svc-77gpb
May  6 18:04:13.759: INFO: Got endpoints: latency-svc-77gpb [1.560782252s]
May  6 18:04:13.875: INFO: Created: latency-svc-7lqxj
May  6 18:04:13.918: INFO: Got endpoints: latency-svc-7lqxj [1.649967805s]
May  6 18:04:13.921: INFO: Created: latency-svc-8vnjq
May  6 18:04:13.970: INFO: Got endpoints: latency-svc-8vnjq [1.587343191s]
May  6 18:04:14.078: INFO: Created: latency-svc-v45hh
May  6 18:04:14.516: INFO: Got endpoints: latency-svc-v45hh [2.066644345s]
May  6 18:04:14.524: INFO: Created: latency-svc-j6jbh
May  6 18:04:14.527: INFO: Got endpoints: latency-svc-j6jbh [1.982984229s]
May  6 18:04:14.671: INFO: Created: latency-svc-7pflq
May  6 18:04:14.680: INFO: Got endpoints: latency-svc-7pflq [2.085950505s]
May  6 18:04:14.917: INFO: Created: latency-svc-bt46l
May  6 18:04:14.945: INFO: Got endpoints: latency-svc-bt46l [2.262330614s]
May  6 18:04:15.066: INFO: Created: latency-svc-qbw5c
May  6 18:04:15.113: INFO: Got endpoints: latency-svc-qbw5c [2.362728458s]
May  6 18:04:15.152: INFO: Created: latency-svc-pbwdh
May  6 18:04:15.252: INFO: Got endpoints: latency-svc-pbwdh [2.437062637s]
May  6 18:04:15.288: INFO: Created: latency-svc-6jm8x
May  6 18:04:15.307: INFO: Got endpoints: latency-svc-6jm8x [2.443775464s]
May  6 18:04:15.350: INFO: Created: latency-svc-t594v
May  6 18:04:15.426: INFO: Got endpoints: latency-svc-t594v [2.30031427s]
May  6 18:04:15.428: INFO: Created: latency-svc-7ptrs
May  6 18:04:15.463: INFO: Got endpoints: latency-svc-7ptrs [1.903286681s]
May  6 18:04:15.504: INFO: Created: latency-svc-cnzk4
May  6 18:04:15.524: INFO: Got endpoints: latency-svc-cnzk4 [1.846307405s]
May  6 18:04:15.599: INFO: Created: latency-svc-z8l9l
May  6 18:04:15.639: INFO: Got endpoints: latency-svc-z8l9l [1.95281693s]
May  6 18:04:15.672: INFO: Created: latency-svc-m7kdj
May  6 18:04:15.692: INFO: Got endpoints: latency-svc-m7kdj [1.976244039s]
May  6 18:04:15.840: INFO: Created: latency-svc-8t4rw
May  6 18:04:15.906: INFO: Created: latency-svc-hs4jd
May  6 18:04:15.906: INFO: Got endpoints: latency-svc-8t4rw [2.147232467s]
May  6 18:04:16.007: INFO: Got endpoints: latency-svc-hs4jd [2.08913988s]
May  6 18:04:16.062: INFO: Created: latency-svc-p6dwv
May  6 18:04:16.168: INFO: Got endpoints: latency-svc-p6dwv [2.197966078s]
May  6 18:04:16.168: INFO: Created: latency-svc-4gss4
May  6 18:04:16.171: INFO: Got endpoints: latency-svc-4gss4 [1.65444213s]
May  6 18:04:16.257: INFO: Created: latency-svc-v6j86
May  6 18:04:16.300: INFO: Got endpoints: latency-svc-v6j86 [1.772551046s]
May  6 18:04:16.329: INFO: Created: latency-svc-w6s6v
May  6 18:04:16.364: INFO: Got endpoints: latency-svc-w6s6v [1.684572166s]
May  6 18:04:16.494: INFO: Created: latency-svc-n2j2k
May  6 18:04:16.832: INFO: Got endpoints: latency-svc-n2j2k [1.886492787s]
May  6 18:04:16.832: INFO: Created: latency-svc-fk8hc
May  6 18:04:16.867: INFO: Got endpoints: latency-svc-fk8hc [1.754461059s]
May  6 18:04:17.025: INFO: Created: latency-svc-bnzrp
May  6 18:04:17.030: INFO: Got endpoints: latency-svc-bnzrp [1.777460607s]
May  6 18:04:17.337: INFO: Created: latency-svc-bp7zk
May  6 18:04:17.340: INFO: Got endpoints: latency-svc-bp7zk [2.033422606s]
May  6 18:04:17.395: INFO: Created: latency-svc-wdllb
May  6 18:04:17.426: INFO: Got endpoints: latency-svc-wdllb [2.00035328s]
May  6 18:04:17.519: INFO: Created: latency-svc-jd4ql
May  6 18:04:17.552: INFO: Got endpoints: latency-svc-jd4ql [2.088606074s]
May  6 18:04:17.582: INFO: Created: latency-svc-fw8lw
May  6 18:04:17.594: INFO: Got endpoints: latency-svc-fw8lw [2.070780215s]
May  6 18:04:17.654: INFO: Created: latency-svc-x25dm
May  6 18:04:17.667: INFO: Got endpoints: latency-svc-x25dm [2.028154529s]
May  6 18:04:17.720: INFO: Created: latency-svc-jdcvs
May  6 18:04:17.733: INFO: Got endpoints: latency-svc-jdcvs [2.040729599s]
May  6 18:04:17.802: INFO: Created: latency-svc-f6cqk
May  6 18:04:17.997: INFO: Got endpoints: latency-svc-f6cqk [2.091290833s]
May  6 18:04:18.211: INFO: Created: latency-svc-2p7tz
May  6 18:04:18.218: INFO: Got endpoints: latency-svc-2p7tz [2.210803394s]
May  6 18:04:18.272: INFO: Created: latency-svc-s7x6t
May  6 18:04:18.298: INFO: Got endpoints: latency-svc-s7x6t [2.130314341s]
May  6 18:04:18.379: INFO: Created: latency-svc-k4vv5
May  6 18:04:18.444: INFO: Got endpoints: latency-svc-k4vv5 [2.273162375s]
May  6 18:04:18.446: INFO: Created: latency-svc-jhrhf
May  6 18:04:18.453: INFO: Got endpoints: latency-svc-jhrhf [2.152972881s]
May  6 18:04:18.515: INFO: Created: latency-svc-x9vl9
May  6 18:04:18.532: INFO: Got endpoints: latency-svc-x9vl9 [2.167370106s]
May  6 18:04:18.559: INFO: Created: latency-svc-zbw7z
May  6 18:04:18.574: INFO: Got endpoints: latency-svc-zbw7z [1.742516198s]
May  6 18:04:18.592: INFO: Created: latency-svc-thdds
May  6 18:04:18.613: INFO: Got endpoints: latency-svc-thdds [1.745933216s]
May  6 18:04:18.659: INFO: Created: latency-svc-lrz4r
May  6 18:04:18.670: INFO: Got endpoints: latency-svc-lrz4r [1.640576081s]
May  6 18:04:18.700: INFO: Created: latency-svc-hblgj
May  6 18:04:18.713: INFO: Got endpoints: latency-svc-hblgj [1.372700484s]
May  6 18:04:18.739: INFO: Created: latency-svc-zjzjf
May  6 18:04:18.755: INFO: Got endpoints: latency-svc-zjzjf [1.328246272s]
May  6 18:04:18.802: INFO: Created: latency-svc-jzqmc
May  6 18:04:18.806: INFO: Got endpoints: latency-svc-jzqmc [1.253911179s]
May  6 18:04:18.862: INFO: Created: latency-svc-kmbnv
May  6 18:04:18.889: INFO: Got endpoints: latency-svc-kmbnv [1.29411015s]
May  6 18:04:18.995: INFO: Created: latency-svc-bwh78
May  6 18:04:18.998: INFO: Got endpoints: latency-svc-bwh78 [1.33147514s]
May  6 18:04:19.225: INFO: Created: latency-svc-zzkms
May  6 18:04:19.255: INFO: Got endpoints: latency-svc-zzkms [1.521535968s]
May  6 18:04:19.390: INFO: Created: latency-svc-cqkgk
May  6 18:04:19.394: INFO: Got endpoints: latency-svc-cqkgk [1.396216815s]
May  6 18:04:19.477: INFO: Created: latency-svc-ngx65
May  6 18:04:19.606: INFO: Got endpoints: latency-svc-ngx65 [1.38770461s]
May  6 18:04:19.608: INFO: Created: latency-svc-f5knw
May  6 18:04:19.628: INFO: Got endpoints: latency-svc-f5knw [1.329515191s]
May  6 18:04:19.682: INFO: Created: latency-svc-vd2q2
May  6 18:04:20.013: INFO: Got endpoints: latency-svc-vd2q2 [1.569131734s]
May  6 18:04:20.018: INFO: Created: latency-svc-qdblb
May  6 18:04:20.042: INFO: Got endpoints: latency-svc-qdblb [1.588444923s]
May  6 18:04:20.106: INFO: Created: latency-svc-k776f
May  6 18:04:20.228: INFO: Got endpoints: latency-svc-k776f [1.696178882s]
May  6 18:04:20.427: INFO: Created: latency-svc-lxnf2
May  6 18:04:20.491: INFO: Got endpoints: latency-svc-lxnf2 [1.917040983s]
May  6 18:04:20.713: INFO: Created: latency-svc-9sscc
May  6 18:04:20.731: INFO: Got endpoints: latency-svc-9sscc [2.117509459s]
May  6 18:04:20.763: INFO: Created: latency-svc-dg7rk
May  6 18:04:20.779: INFO: Got endpoints: latency-svc-dg7rk [2.108954222s]
May  6 18:04:20.802: INFO: Created: latency-svc-tx98x
May  6 18:04:20.862: INFO: Got endpoints: latency-svc-tx98x [2.1491901s]
May  6 18:04:20.902: INFO: Created: latency-svc-wlnwr
May  6 18:04:20.918: INFO: Got endpoints: latency-svc-wlnwr [2.163057297s]
May  6 18:04:20.956: INFO: Created: latency-svc-ffslw
May  6 18:04:21.096: INFO: Got endpoints: latency-svc-ffslw [2.290261894s]
May  6 18:04:21.098: INFO: Created: latency-svc-792bb
May  6 18:04:21.110: INFO: Got endpoints: latency-svc-792bb [2.221374796s]
May  6 18:04:21.157: INFO: Created: latency-svc-jmxlf
May  6 18:04:21.171: INFO: Got endpoints: latency-svc-jmxlf [2.172194968s]
May  6 18:04:21.196: INFO: Created: latency-svc-rtx54
May  6 18:04:21.258: INFO: Got endpoints: latency-svc-rtx54 [2.0034548s]
May  6 18:04:21.260: INFO: Created: latency-svc-6kzlj
May  6 18:04:21.266: INFO: Got endpoints: latency-svc-6kzlj [1.87258589s]
May  6 18:04:21.294: INFO: Created: latency-svc-6f5xw
May  6 18:04:21.309: INFO: Got endpoints: latency-svc-6f5xw [1.703160348s]
May  6 18:04:21.331: INFO: Created: latency-svc-8ptmb
May  6 18:04:21.346: INFO: Got endpoints: latency-svc-8ptmb [1.718306741s]
May  6 18:04:21.397: INFO: Created: latency-svc-572fw
May  6 18:04:21.416: INFO: Got endpoints: latency-svc-572fw [1.402883297s]
May  6 18:04:21.460: INFO: Created: latency-svc-p42x5
May  6 18:04:21.689: INFO: Got endpoints: latency-svc-p42x5 [1.647752076s]
May  6 18:04:22.216: INFO: Created: latency-svc-c8xtr
May  6 18:04:22.354: INFO: Got endpoints: latency-svc-c8xtr [2.125950765s]
May  6 18:04:22.524: INFO: Created: latency-svc-twb2g
May  6 18:04:23.056: INFO: Got endpoints: latency-svc-twb2g [2.565111825s]
May  6 18:04:23.391: INFO: Created: latency-svc-c6cbf
May  6 18:04:23.435: INFO: Got endpoints: latency-svc-c6cbf [2.704663811s]
May  6 18:04:23.750: INFO: Created: latency-svc-p2888
May  6 18:04:23.754: INFO: Got endpoints: latency-svc-p2888 [2.9750702s]
May  6 18:04:24.172: INFO: Created: latency-svc-5rxdd
May  6 18:04:24.324: INFO: Got endpoints: latency-svc-5rxdd [3.461459468s]
May  6 18:04:24.373: INFO: Created: latency-svc-v56fk
May  6 18:04:24.804: INFO: Got endpoints: latency-svc-v56fk [3.886513967s]
May  6 18:04:24.823: INFO: Created: latency-svc-k5bd8
May  6 18:04:24.878: INFO: Got endpoints: latency-svc-k5bd8 [3.781597211s]
May  6 18:04:25.061: INFO: Created: latency-svc-hb626
May  6 18:04:25.111: INFO: Got endpoints: latency-svc-hb626 [4.001231256s]
May  6 18:04:25.243: INFO: Created: latency-svc-gzcd2
May  6 18:04:25.258: INFO: Got endpoints: latency-svc-gzcd2 [4.086892635s]
May  6 18:04:25.306: INFO: Created: latency-svc-5p26v
May  6 18:04:25.474: INFO: Got endpoints: latency-svc-5p26v [4.215633669s]
May  6 18:04:25.476: INFO: Created: latency-svc-6k5zr
May  6 18:04:25.714: INFO: Got endpoints: latency-svc-6k5zr [4.447397072s]
May  6 18:04:25.716: INFO: Created: latency-svc-29brq
May  6 18:04:25.732: INFO: Got endpoints: latency-svc-29brq [4.422809902s]
May  6 18:04:26.116: INFO: Created: latency-svc-krbw5
May  6 18:04:26.276: INFO: Got endpoints: latency-svc-krbw5 [4.929812903s]
May  6 18:04:26.276: INFO: Latencies: [1.050238429s 1.149335038s 1.15083648s 1.253911179s 1.29411015s 1.328246272s 1.329515191s 1.33147514s 1.372700484s 1.38770461s 1.396216815s 1.402883297s 1.432145839s 1.437296418s 1.473622657s 1.482066169s 1.515095467s 1.521535968s 1.547147636s 1.553841051s 1.560782252s 1.569131734s 1.56926826s 1.587343191s 1.588444923s 1.596475253s 1.610525397s 1.617285858s 1.623007597s 1.631959339s 1.640576081s 1.647752076s 1.649967805s 1.65444213s 1.658427603s 1.665898318s 1.676578321s 1.682336901s 1.684572166s 1.694499497s 1.695505341s 1.696178882s 1.703160348s 1.718306741s 1.734715154s 1.742516198s 1.745933216s 1.754461059s 1.772551046s 1.777460607s 1.803841804s 1.846307405s 1.87258589s 1.886492787s 1.887115205s 1.890106002s 1.903286681s 1.915607795s 1.917040983s 1.938910176s 1.951216034s 1.95281693s 1.954287344s 1.957341818s 1.974658922s 1.976244039s 1.982984229s 2.00035328s 2.0034548s 2.003680513s 2.008581381s 2.028154529s 2.033422606s 2.035026276s 2.036515597s 2.040729599s 2.041730883s 2.054072534s 2.05942066s 2.066644345s 2.070780215s 2.071350395s 2.072322632s 2.07860004s 2.085950505s 2.088606074s 2.08913988s 2.090198145s 2.091290833s 2.093214506s 2.096714137s 2.098855924s 2.108954222s 2.109289334s 2.116716431s 2.117509459s 2.125735835s 2.125950765s 2.130314341s 2.130342111s 2.143927252s 2.147232467s 2.1491901s 2.152972881s 2.155784546s 2.163057297s 2.167370106s 2.172194968s 2.173452556s 2.197966078s 2.210803394s 2.221374796s 2.262330614s 2.267866485s 2.273162375s 2.290261894s 2.30031427s 2.305848896s 2.310788685s 2.321848512s 2.362728458s 2.371840113s 2.418009287s 2.418954886s 2.437062637s 2.443775464s 2.460521997s 2.522170214s 2.543697492s 2.565111825s 2.583459078s 2.60905386s 2.617779405s 2.635816613s 2.642679496s 2.663350753s 2.676857949s 2.694673441s 2.704663811s 2.714230811s 2.726947936s 2.754562062s 2.880843215s 2.886408414s 2.927708127s 2.933028003s 2.951945704s 2.9750702s 3.037388043s 3.061734749s 3.08243361s 3.089961493s 3.158734219s 3.164715098s 3.18588758s 3.187129056s 3.206589081s 3.231566689s 3.237447354s 3.249849722s 3.257349538s 3.276427233s 3.292492057s 3.308812993s 3.377133012s 3.461459468s 3.491349848s 3.619732425s 3.781597211s 3.802836388s 3.805557673s 3.87387703s 3.886513967s 3.894445629s 4.001231256s 4.086892635s 4.142935197s 4.165461176s 4.171754199s 4.179467828s 4.18595214s 4.186875401s 4.203156997s 4.215633669s 4.244994961s 4.255262613s 4.267382759s 4.270913702s 4.276405205s 4.277849056s 4.287500925s 4.304239583s 4.312186316s 4.349229535s 4.361399424s 4.378472156s 4.422809902s 4.447397072s 4.556528874s 4.929812903s]
May  6 18:04:26.276: INFO: 50 %ile: 2.143927252s
May  6 18:04:26.276: INFO: 90 %ile: 4.18595214s
May  6 18:04:26.276: INFO: 99 %ile: 4.556528874s
May  6 18:04:26.276: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:04:26.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3811" for this suite.

• [SLOW TEST:45.739 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":93,"skipped":1598,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:04:26.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May  6 18:04:26.500: INFO: Waiting up to 5m0s for pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a" in namespace "containers-3584" to be "Succeeded or Failed"
May  6 18:04:26.576: INFO: Pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a": Phase="Pending", Reason="", readiness=false. Elapsed: 75.559632ms
May  6 18:04:28.785: INFO: Pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285297248s
May  6 18:04:30.800: INFO: Pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a": Phase="Running", Reason="", readiness=true. Elapsed: 4.300267597s
May  6 18:04:32.813: INFO: Pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.313412227s
STEP: Saw pod success
May  6 18:04:32.813: INFO: Pod "client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a" satisfied condition "Succeeded or Failed"
May  6 18:04:32.824: INFO: Trying to get logs from node kali-worker pod client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a container test-container: 
STEP: delete the pod
May  6 18:04:33.712: INFO: Waiting for pod client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a to disappear
May  6 18:04:34.114: INFO: Pod client-containers-4cc19ee6-2ab5-4bf8-80d2-c9cea245c81a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:04:34.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3584" for this suite.

• [SLOW TEST:8.460 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1605,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:04:34.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:04:46.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3674" for this suite.
STEP: Destroying namespace "nsdeletetest-2099" for this suite.
May  6 18:04:46.581: INFO: Namespace nsdeletetest-2099 was already deleted
STEP: Destroying namespace "nsdeletetest-6333" for this suite.

• [SLOW TEST:12.067 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":95,"skipped":1620,"failed":0}
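The namespace test above checks an invariant: deleting a namespace removes the services created in it, and recreating a namespace of the same name yields it empty. A toy in-memory model of that invariant (this is not the Kubernetes API; names are illustrative):

```python
# Toy model: namespaces map to the set of services they contain.
cluster = {}

def create_namespace(name):
    cluster[name] = set()

def create_service(namespace, service):
    cluster[namespace].add(service)

def delete_namespace(name):
    # Deleting a namespace cascades to every object inside it.
    del cluster[name]

create_namespace("nsdeletetest")
create_service("nsdeletetest", "test-service")
delete_namespace("nsdeletetest")
create_namespace("nsdeletetest")          # recreate under the same name
assert cluster["nsdeletetest"] == set()   # no services survive the delete
```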
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:04:46.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-af0dd223-1655-466c-8ea2-ef43a0a9b034
STEP: Creating secret with name s-test-opt-upd-e974f55e-fbcf-4a5d-8b46-32b171ec2a47
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-af0dd223-1655-466c-8ea2-ef43a0a9b034
STEP: Updating secret s-test-opt-upd-e974f55e-fbcf-4a5d-8b46-32b171ec2a47
STEP: Creating secret with name s-test-opt-create-ab6e9038-80dc-4b9e-9d72-ceb5b20181bd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:06:06.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-41" for this suite.

• [SLOW TEST:79.891 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1626,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:06:06.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-4e753615-878c-45c2-8548-25caa429b0c3
STEP: Creating a pod to test consume configMaps
May  6 18:06:06.981: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e" in namespace "configmap-5180" to be "Succeeded or Failed"
May  6 18:06:06.998: INFO: Pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.831466ms
May  6 18:06:09.003: INFO: Pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021536087s
May  6 18:06:11.085: INFO: Pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103924908s
May  6 18:06:13.112: INFO: Pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130812671s
STEP: Saw pod success
May  6 18:06:13.112: INFO: Pod "pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e" satisfied condition "Succeeded or Failed"
May  6 18:06:13.142: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e container configmap-volume-test: 
STEP: delete the pod
May  6 18:06:13.390: INFO: Waiting for pod pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e to disappear
May  6 18:06:13.399: INFO: Pod pod-configmaps-b8714b0f-36c5-4028-8f3a-e1e88f221e7e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:06:13.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5180" for this suite.

• [SLOW TEST:6.656 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1635,"failed":0}
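The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed" … Elapsed: …` lines above reflect a poll loop: check the pod phase, log the elapsed time, and retry on an interval until a terminal phase or the timeout. A sketch with an injected status function (stubbed here; the real framework reads phases from the API server):

```python
import itertools
import time

def wait_for_pod_phase(get_phase, timeout_s=300, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval_s)

# Stubbed status sequence mirroring the log: Pending three times, then Succeeded.
phases = itertools.chain(["Pending"] * 3, itertools.repeat("Succeeded"))
phase, _ = wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None)
print(phase)  # Succeeded
```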
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:06:13.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-725f26fa-5eb6-41be-b32f-5322d2f3b0bb
STEP: Creating a pod to test consume configMaps
May  6 18:06:14.675: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541" in namespace "projected-2038" to be "Succeeded or Failed"
May  6 18:06:15.289: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Pending", Reason="", readiness=false. Elapsed: 613.459536ms
May  6 18:06:17.414: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Pending", Reason="", readiness=false. Elapsed: 2.738670469s
May  6 18:06:19.536: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Pending", Reason="", readiness=false. Elapsed: 4.861242985s
May  6 18:06:21.667: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Pending", Reason="", readiness=false. Elapsed: 6.992301256s
May  6 18:06:23.761: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Running", Reason="", readiness=true. Elapsed: 9.085879395s
May  6 18:06:25.765: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.09031874s
STEP: Saw pod success
May  6 18:06:25.765: INFO: Pod "pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541" satisfied condition "Succeeded or Failed"
May  6 18:06:25.768: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541 container projected-configmap-volume-test: 
STEP: delete the pod
May  6 18:06:25.869: INFO: Waiting for pod pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541 to disappear
May  6 18:06:25.871: INFO: Pod pod-projected-configmaps-b1337f74-bd54-4fa4-a731-d5598eb79541 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:06:25.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2038" for this suite.

• [SLOW TEST:12.485 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1661,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:06:25.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-698a91fd-a186-4a3f-b639-b5f831448902 in namespace container-probe-627
May  6 18:06:34.204: INFO: Started pod test-webserver-698a91fd-a186-4a3f-b639-b5f831448902 in namespace container-probe-627
STEP: checking the pod's current state and verifying that restartCount is present
May  6 18:06:34.207: INFO: Initial restart count of pod test-webserver-698a91fd-a186-4a3f-b639-b5f831448902 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:10:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-627" for this suite.

• [SLOW TEST:249.923 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1663,"failed":0}
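The liveness test above verifies that a pod whose `/healthz` probe keeps answering is never restarted (its restartCount stays at the initial 0 for the full observation window). A simplified model of the kubelet's consecutive-failure rule, using the default `failureThreshold` of 3 (this is a sketch of the decision logic, not kubelet code):

```python
def should_restart(probe_results, failure_threshold=3):
    """Toy kubelet rule: restart once probe failures reach failure_threshold consecutively."""
    consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            return True
    return False

# A /healthz endpoint that always answers 200: no restart is ever triggered.
assert should_restart([True] * 100) is False
# Three consecutive failures cross the default threshold.
assert should_restart([True, False, False, False]) is True
```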
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:10:35.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  6 18:10:36.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:10:54.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2135" for this suite.

• [SLOW TEST:18.737 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":100,"skipped":1689,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:10:54.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 18:11:03.585: INFO: DNS probes using dns-test-72a9a618-a21f-4758-a794-c8a3675283d7 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 18:11:12.806: INFO: File wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:12.809: INFO: File jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:12.809: INFO: Lookups using dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca failed for: [wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local]

May  6 18:11:18.208: INFO: File wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:18.211: INFO: File jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:18.211: INFO: Lookups using dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca failed for: [wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local]

May  6 18:11:22.814: INFO: File wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:22.817: INFO: File jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:22.817: INFO: Lookups using dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca failed for: [wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local]

May  6 18:11:27.940: INFO: File wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:27.944: INFO: File jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca contains 'foo.example.com.
' instead of 'bar.example.com.'
May  6 18:11:27.944: INFO: Lookups using dns-9530/dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca failed for: [wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local]

May  6 18:11:32.821: INFO: DNS probes using dns-test-12d55bb9-af4e-4fd2-b1b8-06e2cb3dcfca succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9530.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9530.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 18:11:49.936: INFO: File wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local from pod  dns-9530/dns-test-38234050-f140-423f-b5dd-40f74171c127 contains '' instead of '10.108.50.139'
May  6 18:11:49.940: INFO: Lookups using dns-9530/dns-test-38234050-f140-423f-b5dd-40f74171c127 failed for: [wheezy_udp@dns-test-service-3.dns-9530.svc.cluster.local]

May  6 18:11:54.949: INFO: DNS probes using dns-test-38234050-f140-423f-b5dd-40f74171c127 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:11:55.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9530" for this suite.

• [SLOW TEST:60.915 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":101,"skipped":1696,"failed":0}
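The `contains 'foo.example.com.' instead of 'bar.example.com.'` retries above show the test tolerating DNS propagation delay: the dig loop keeps writing answers to a results file, and the prober re-reads until the answer matches the updated ExternalName. A sketch of that converge-or-give-up check (helper name and attempt cap are illustrative, not from the framework):

```python
def lookup_converges(observations, expected, max_attempts=5):
    """Return the attempt on which an observed answer (stripped of dig's
    trailing newline and root dot) matches, or None if it never does."""
    for attempt, raw in enumerate(observations[:max_attempts], start=1):
        answer = raw.strip().rstrip(".")
        if answer == expected:
            return attempt
    return None

# The log shows stale 'foo.example.com.' answers for several rounds before
# the CNAME change to bar.example.com propagates.
observed = ["foo.example.com.\n"] * 3 + ["bar.example.com.\n"]
print(lookup_converges(observed, "bar.example.com"))  # 4
```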
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:11:55.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-3851653e-badc-495d-872a-2bdc55ed8114
STEP: Creating secret with name s-test-opt-upd-c0a8085a-a4aa-403a-9b02-047f38c5c0b3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3851653e-badc-495d-872a-2bdc55ed8114
STEP: Updating secret s-test-opt-upd-c0a8085a-a4aa-403a-9b02-047f38c5c0b3
STEP: Creating secret with name s-test-opt-create-c6a8d8cf-b846-40cb-ac03-9cbc55ed77fd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:13:28.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1670" for this suite.

• [SLOW TEST:93.408 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:13:28.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-9b4546e3-071e-4d49-b41b-adfb95c6caba
STEP: Creating a pod to test consume secrets
May  6 18:13:29.018: INFO: Waiting up to 5m0s for pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0" in namespace "secrets-1521" to be "Succeeded or Failed"
May  6 18:13:29.025: INFO: Pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667783ms
May  6 18:13:31.323: INFO: Pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305111858s
May  6 18:13:33.361: INFO: Pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34346974s
May  6 18:13:35.697: INFO: Pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.679273941s
STEP: Saw pod success
May  6 18:13:35.697: INFO: Pod "pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0" satisfied condition "Succeeded or Failed"
May  6 18:13:35.701: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0 container secret-volume-test: 
STEP: delete the pod
May  6 18:13:35.882: INFO: Waiting for pod pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0 to disappear
May  6 18:13:35.899: INFO: Pod pod-secrets-45fe487f-5cbc-4e3a-bfd6-580451989ab0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:13:35.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1521" for this suite.

• [SLOW TEST:7.153 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1774,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:13:36.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:13:36.657: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1" in namespace "downward-api-8491" to be "Succeeded or Failed"
May  6 18:13:36.690: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.429707ms
May  6 18:13:38.776: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118473413s
May  6 18:13:40.779: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121932045s
May  6 18:13:42.783: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126027538s
May  6 18:13:44.788: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130819734s
STEP: Saw pod success
May  6 18:13:44.788: INFO: Pod "downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1" satisfied condition "Succeeded or Failed"
May  6 18:13:44.791: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1 container client-container: 
STEP: delete the pod
May  6 18:13:44.949: INFO: Waiting for pod downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1 to disappear
May  6 18:13:45.110: INFO: Pod downwardapi-volume-5356a447-4c71-4bd5-a1a4-d1773738b5f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:13:45.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8491" for this suite.

• [SLOW TEST:9.138 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1779,"failed":0}
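The downward API test above exposes the container's memory request (a quantity such as `64Mi`) through a volume, where it is rendered in bytes unless a divisor is set. A sketch of the binary-suffix conversion (simplified: real Kubernetes quantities also accept decimal suffixes like `M`/`G` and exponent forms):

```python
# Binary-suffix multipliers as used by Kubernetes resource quantities.
BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def memory_to_bytes(quantity):
    """Convert quantities like '64Mi' to bytes (binary suffixes only)."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)  # plain byte count

print(memory_to_bytes("64Mi"))  # 67108864
```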
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:13:45.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:13:45.774: INFO: Create a RollingUpdate DaemonSet
May  6 18:13:45.778: INFO: Check that daemon pods launch on every node of the cluster
May  6 18:13:45.794: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:45.810: INFO: Number of nodes with available pods: 0
May  6 18:13:45.810: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:46.816: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:46.820: INFO: Number of nodes with available pods: 0
May  6 18:13:46.820: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:48.003: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:48.261: INFO: Number of nodes with available pods: 0
May  6 18:13:48.261: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:48.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:48.838: INFO: Number of nodes with available pods: 0
May  6 18:13:48.838: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:50.016: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:50.020: INFO: Number of nodes with available pods: 0
May  6 18:13:50.020: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:51.022: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:51.026: INFO: Number of nodes with available pods: 1
May  6 18:13:51.026: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:51.841: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:51.888: INFO: Number of nodes with available pods: 1
May  6 18:13:51.888: INFO: Node kali-worker is running more than one daemon pod
May  6 18:13:52.818: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:13:52.822: INFO: Number of nodes with available pods: 2
May  6 18:13:52.822: INFO: Number of running nodes: 2, number of available pods: 2
May  6 18:13:52.822: INFO: Update the DaemonSet to trigger a rollout
May  6 18:13:52.829: INFO: Updating DaemonSet daemon-set
May  6 18:14:04.118: INFO: Roll back the DaemonSet before rollout is complete
May  6 18:14:04.309: INFO: Updating DaemonSet daemon-set
May  6 18:14:04.309: INFO: Make sure DaemonSet rollback is complete
May  6 18:14:04.864: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:04.864: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:04.867: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:05.908: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:05.908: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:06.362: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:07.100: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:07.100: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:07.104: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:08.459: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:08.460: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:08.464: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:09.907: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:09.907: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:09.912: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:10.896: INFO: Wrong image for pod: daemon-set-q5866. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  6 18:14:10.896: INFO: Pod daemon-set-q5866 is not available
May  6 18:14:10.900: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:14:12.140: INFO: Pod daemon-set-7v9z7 is not available
May  6 18:14:12.531: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-223, will wait for the garbage collector to delete the pods
May  6 18:14:14.207: INFO: Deleting DaemonSet.extensions daemon-set took: 1.559848308s
May  6 18:14:15.507: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300253074s
May  6 18:14:20.010: INFO: Number of nodes with available pods: 0
May  6 18:14:20.010: INFO: Number of running nodes: 0, number of available pods: 0
May  6 18:14:20.044: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-223/daemonsets","resourceVersion":"2060682"},"items":null}

May  6 18:14:20.047: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-223/pods","resourceVersion":"2060682"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:14:20.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-223" for this suite.

• [SLOW TEST:34.857 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":105,"skipped":1790,"failed":0}
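[Editor's note] The rollback scenario logged above can be reproduced outside the suite with a minimal RollingUpdate DaemonSet. This is an illustrative sketch, not the suite's actual fixture; the name and image mirror the log's `daemon-set` and `httpd:2.4.38-alpine`:

```yaml
# Hypothetical manifest mirroring the test's DaemonSet. Apply it, update the
# image to a broken tag (the log used foo:non-existent), then roll back before
# the rollout completes.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Rolling back mid-rollout is then `kubectl rollout undo daemonset/daemon-set`; pods still running the healthy image should not restart, which is exactly what the test asserts.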
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:14:20.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:14:20.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d" in namespace "downward-api-3300" to be "Succeeded or Failed"
May  6 18:14:20.193: INFO: Pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.605451ms
May  6 18:14:22.198: INFO: Pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056623123s
May  6 18:14:24.204: INFO: Pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06282193s
May  6 18:14:26.320: INFO: Pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178083695s
STEP: Saw pod success
May  6 18:14:26.320: INFO: Pod "downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d" satisfied condition "Succeeded or Failed"
May  6 18:14:26.505: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d container client-container: 
STEP: delete the pod
May  6 18:14:26.818: INFO: Waiting for pod downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d to disappear
May  6 18:14:26.865: INFO: Pod downwardapi-volume-15b867b5-2852-40f0-a861-357fbc9a5e7d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:14:26.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3300" for this suite.

• [SLOW TEST:7.003 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1799,"failed":0}
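[Editor's note] The "set mode on item file" behavior exercised above comes down to the per-item `mode` field of a downward API volume. A minimal sketch (pod and path names are invented for illustration):

```yaml
# Hypothetical pod mirroring the downward API mode test: the item's `mode`
# (octal 0400 here) sets the permissions of the projected file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400
```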
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:14:27.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:14:27.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-860" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":107,"skipped":1810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:14:27.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-4139/configmap-test-9a774406-5941-4557-bb5d-cb9848e557dd
STEP: Creating a pod to test consume configMaps
May  6 18:14:27.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636" in namespace "configmap-4139" to be "Succeeded or Failed"
May  6 18:14:27.890: INFO: Pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636": Phase="Pending", Reason="", readiness=false. Elapsed: 58.64613ms
May  6 18:14:29.966: INFO: Pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135159729s
May  6 18:14:32.020: INFO: Pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188763202s
May  6 18:14:34.023: INFO: Pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.192529396s
STEP: Saw pod success
May  6 18:14:34.024: INFO: Pod "pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636" satisfied condition "Succeeded or Failed"
May  6 18:14:34.026: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636 container env-test: 
STEP: delete the pod
May  6 18:14:34.053: INFO: Waiting for pod pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636 to disappear
May  6 18:14:34.056: INFO: Pod pod-configmaps-5d6b28c6-7f6e-4d8c-badf-bc6be367e636 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:14:34.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4139" for this suite.

• [SLOW TEST:6.466 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1837,"failed":0}
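[Editor's note] "Consumable via the environment" means injecting a ConfigMap key as a container environment variable. A minimal sketch under assumed names (the suite's generated names differ):

```yaml
# Hypothetical ConfigMap + pod pair mirroring the env-consumption test:
# the container reads the key through `valueFrom.configMapKeyRef`.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  DATA_1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: DATA_1
```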
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:14:34.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-7dee9a01-5f78-44c5-b4b2-8f3d53576fbb
STEP: Creating configMap with name cm-test-opt-upd-96e0e849-eeda-4f66-a828-e11ba60ff8f3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7dee9a01-5f78-44c5-b4b2-8f3d53576fbb
STEP: Updating configmap cm-test-opt-upd-96e0e849-eeda-4f66-a828-e11ba60ff8f3
STEP: Creating configMap with name cm-test-opt-create-3966501c-a004-4674-a66b-abc94f97afe9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:15:51.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7209" for this suite.

• [SLOW TEST:77.831 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1845,"failed":0}
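[Editor's note] The "optional updates" behavior hinges on `optional: true` on a ConfigMap volume: the pod starts even if the ConfigMap does not exist yet, and subsequent create/update/delete of the ConfigMap is eventually reflected in the mounted files. An illustrative sketch (names assumed):

```yaml
# Hypothetical pod mirroring the optional-ConfigMap volume test: the kubelet
# resyncs the mounted files as the referenced ConfigMap changes.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/key 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-test-opt-create
      optional: true
```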
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:15:51.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  6 18:16:02.120: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  6 18:16:02.141: INFO: Pod pod-with-poststart-exec-hook still exists
May  6 18:16:04.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  6 18:16:04.464: INFO: Pod pod-with-poststart-exec-hook still exists
May  6 18:16:06.142: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  6 18:16:06.151: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:16:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3255" for this suite.

• [SLOW TEST:14.235 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1848,"failed":0}
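[Editor's note] The postStart exec hook verified above is declared under `lifecycle.postStart` on the container. A minimal sketch reusing the pod name from the log (the hook command is invented for illustration):

```yaml
# Hypothetical pod mirroring the postStart exec hook test: the hook runs
# immediately after the container starts, before the pod is marked Running
# from the hook's point of view.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```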
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:16:06.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:16:11.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5187" for this suite.

• [SLOW TEST:5.531 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":111,"skipped":1857,"failed":0}
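[Editor's note] Adoption works because the ReplicationController's selector matches the pre-existing bare pod's label, so the controller takes ownership of it instead of creating a new replica. An illustrative sketch of the pairing (images and commands assumed):

```yaml
# Hypothetical bare pod + RC pair mirroring the adoption test: the RC's
# selector {name: pod-adoption} matches the orphan pod, which is adopted.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
```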
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:16:11.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  6 18:16:11.899: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  6 18:16:11.952: INFO: Waiting for terminating namespaces to be deleted...
May  6 18:16:11.955: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  6 18:16:11.960: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.961: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:16:11.961: INFO: pod-handle-http-request from container-lifecycle-hook-3255 started at 2020-05-06 18:15:52 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.961: INFO: 	Container pod-handle-http-request ready: true, restart count 0
May  6 18:16:11.961: INFO: pod-adoption from replication-controller-5187 started at 2020-05-06 18:16:06 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.961: INFO: 	Container pod-adoption ready: true, restart count 0
May  6 18:16:11.961: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.961: INFO: 	Container kindnet-cni ready: true, restart count 1
May  6 18:16:11.961: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  6 18:16:11.965: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.965: INFO: 	Container kindnet-cni ready: true, restart count 0
May  6 18:16:11.965: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:16:11.965: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ae6d9a9c-5cfa-4c4e-b96f-beefa7402bc3 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-ae6d9a9c-5cfa-4c4e-b96f-beefa7402bc3 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ae6d9a9c-5cfa-4c4e-b96f-beefa7402bc3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:21:20.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6587" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:309.808 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":112,"skipped":1866,"failed":0}
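[Editor's note] The predicate being validated: a hostPort bound with hostIP `0.0.0.0` (or left empty) claims the port on every host address, so a second pod on the same node requesting the same port/protocol conflicts even with hostIP `127.0.0.1`. An illustrative sketch (node pinning and images assumed; the port matches the log's 54322):

```yaml
# Two hypothetical pods mirroring the hostPort conflict test: pinned to the
# same node, pod5 stays Pending because pod4's 0.0.0.0 binding covers all IPs.
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker2   # assumed node name from the log
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker2
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```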
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:21:21.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:21:25.461: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:21:28.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386085, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:21:31.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386085, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:21:33.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386085, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:21:35.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386086, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386085, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:21:38.086: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:21:38.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-740" for this suite.
STEP: Destroying namespace "webhook-740-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.618 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":113,"skipped":1871,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:21:42.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2748 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2748;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2748 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2748;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2748.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2748.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2748.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2748.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2748.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2748.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2748.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.214.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.214.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.214.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.214.100_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2748 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2748;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2748 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2748;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2748.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2748.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2748.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2748.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2748.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2748.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2748.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2748.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2748.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.214.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.214.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.214.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.214.100_tcp@PTR;sleep 1; done
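Each probe is a `dig +search` lookup whose non-empty answer writes an `OK` marker under `/results`; the doubled `$$` in the logged commands escapes to a literal `$` in the container, since Kubernetes expands `$(VAR)` references in pod command fields. The marker names for the partially-qualified service lookups follow a fixed `image_proto@name` pattern; a minimal, self-contained sketch of that naming (the real test builds these strings in Go):

```shell
# Sketch: the result-marker names the probe pod writes on success for the
# partially-qualified service lookups (PodARecord/PTR markers omitted).
svc=dns-test-service
ns=dns-2748
for image in wheezy jessie; do       # the two prober images in this test
  for proto in udp tcp; do           # dig +notcp vs dig +tcp
    for name in "$svc" "$svc.$ns" "$svc.$ns.svc"; do
      echo "${image}_${proto}@${name}"
    done
  done
done
```

These twelve names are exactly the markers the "looking for the results for each expected name" step below polls for.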

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 18:21:55.358: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.361: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.363: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.439: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.442: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.446: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.448: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.452: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.472: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.474: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.476: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.479: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.481: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.488: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:21:55.521: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]
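The failed lookup set above is re-checked on a fixed interval (the logged passes are roughly 5 s apart) until every expected name resolves or the test times out. The polling shape, sketched with a hypothetical stand-in for "all markers present":

```shell
# Sketch: poll until all lookups succeed. all_resolved is a stand-in
# condition (hypothetical: succeeds on the third pass); the real test
# checks the /results markers served by the probe pod.
passes=0
all_resolved() { [ "$passes" -ge 3 ]; }
until all_resolved; do
  passes=$((passes + 1))
  # sleep 5   # approximate interval between the logged passes
done
echo "resolved after $passes passes"
```

Early passes fail simply because the service endpoints and DNS records have not propagated yet, which is why the identical error list repeats below before the test eventually passes or gives up.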

May  6 18:22:00.525: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.528: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.531: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.533: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.536: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.539: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.542: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.544: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.582: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.585: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.595: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.600: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:00.621: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]

May  6 18:22:05.600: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:05.984: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:05.988: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.307: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.327: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.351: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.601: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.698: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.838: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.840: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.843: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.845: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.848: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.856: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:06.869: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]

May  6 18:22:10.526: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.529: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.532: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.535: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.539: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.541: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.544: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.546: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.562: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.565: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.567: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.570: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.573: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.580: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.583: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:10.608: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]

May  6 18:22:15.559: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.563: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.612: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.616: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.629: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.632: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.739: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.742: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.761: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.764: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.767: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.769: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.773: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.776: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.780: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.783: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:15.804: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]

May  6 18:22:20.547: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.550: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.552: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.554: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.558: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.560: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.562: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.577: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.579: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.581: INFO: Unable to read jessie_udp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.584: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748 from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.587: INFO: Unable to read jessie_udp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.590: INFO: Unable to read jessie_tcp@dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.592: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc from pod dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59: the server could not find the requested resource (get pods dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59)
May  6 18:22:20.613: INFO: Lookups using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2748 wheezy_tcp@dns-test-service.dns-2748 wheezy_udp@dns-test-service.dns-2748.svc wheezy_tcp@dns-test-service.dns-2748.svc wheezy_udp@_http._tcp.dns-test-service.dns-2748.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2748.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2748 jessie_tcp@dns-test-service.dns-2748 jessie_udp@dns-test-service.dns-2748.svc jessie_tcp@dns-test-service.dns-2748.svc jessie_udp@_http._tcp.dns-test-service.dns-2748.svc jessie_tcp@_http._tcp.dns-test-service.dns-2748.svc]

May  6 18:22:25.636: INFO: DNS probes using dns-2748/dns-test-b97b84bc-0d8e-4ce6-b45b-3f62599a7b59 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:22:26.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2748" for this suite.

• [SLOW TEST:44.826 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":114,"skipped":1887,"failed":0}
S
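The failed-lookup list above follows a fixed pattern: for each probe image (wheezy, jessie), the test queries the service name at increasing qualification levels over both UDP and TCP. A minimal sketch reconstructing that probe matrix (the helper name `build_probe_names` is illustrative, not from the Kubernetes source):

```python
# Rebuild the probe-name matrix seen in the DNS e2e failure list above.
def build_probe_names(service, namespace):
    bases = [
        service,                                  # partial name
        f"{service}.{namespace}",                 # namespace-qualified
        f"{service}.{namespace}.svc",             # svc-qualified
        f"_http._tcp.{service}.{namespace}.svc",  # SRV-style record
    ]
    # Iteration order matches the log: image outer, base, then protocol.
    return [f"{image}_{proto}@{base}"
            for image in ("wheezy", "jessie")
            for base in bases
            for proto in ("udp", "tcp")]

names = build_probe_names("dns-test-service", "dns-2748")
```

The 16 names this produces are exactly the entries reported as failing at 18:22:20 and succeeding five seconds later, once the pod's DNS records had propagated.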
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:22:26.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f1728dfc-31ab-4618-a15a-f6234ed8c267
STEP: Creating a pod to test consume configMaps
May  6 18:22:27.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9" in namespace "configmap-5480" to be "Succeeded or Failed"
May  6 18:22:27.141: INFO: Pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.864559ms
May  6 18:22:29.145: INFO: Pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019483549s
May  6 18:22:31.148: INFO: Pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023024343s
May  6 18:22:33.152: INFO: Pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026427782s
STEP: Saw pod success
May  6 18:22:33.152: INFO: Pod "pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9" satisfied condition "Succeeded or Failed"
May  6 18:22:33.154: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9 container configmap-volume-test: 
STEP: delete the pod
May  6 18:22:33.217: INFO: Waiting for pod pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9 to disappear
May  6 18:22:33.312: INFO: Pod pod-configmaps-45dcb196-0db1-4353-b111-7b7bde5de1c9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:22:33.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5480" for this suite.

• [SLOW TEST:6.376 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1888,"failed":0}
SSSS
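The "Waiting up to 5m0s … to be 'Succeeded or Failed'" lines above are the framework polling the pod phase until it reaches a terminal state. A toy sketch of that wait loop, fed the exact phase sequence observed in the log (`wait_for_terminal_phase` is a hypothetical name, not the framework's API):

```python
# Poll a pod-phase source until a terminal phase or poll budget is hit.
def wait_for_terminal_phase(poll_phase, max_polls):
    """poll_phase() returns the current phase string on each call."""
    for i in range(max_polls):
        phase = poll_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, i
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the observations logged above: Pending x3, then Succeeded.
observations = iter(["Pending", "Pending", "Pending", "Succeeded"])
phase, polls = wait_for_terminal_phase(lambda: next(observations), 150)
```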
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:22:33.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-3023
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3023 to expose endpoints map[]
May  6 18:22:33.600: INFO: successfully validated that service endpoint-test2 in namespace services-3023 exposes endpoints map[] (63.49324ms elapsed)
STEP: Creating pod pod1 in namespace services-3023
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3023 to expose endpoints map[pod1:[80]]
May  6 18:22:36.718: INFO: successfully validated that service endpoint-test2 in namespace services-3023 exposes endpoints map[pod1:[80]] (3.075915822s elapsed)
STEP: Creating pod pod2 in namespace services-3023
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3023 to expose endpoints map[pod1:[80] pod2:[80]]
May  6 18:22:41.333: INFO: Unexpected endpoints: found map[c6ad17fb-b360-498d-aa62-b4b5ef9e2377:[80]], expected map[pod1:[80] pod2:[80]] (4.610959203s elapsed, will retry)
May  6 18:22:43.541: INFO: successfully validated that service endpoint-test2 in namespace services-3023 exposes endpoints map[pod1:[80] pod2:[80]] (6.818672488s elapsed)
STEP: Deleting pod pod1 in namespace services-3023
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3023 to expose endpoints map[pod2:[80]]
May  6 18:22:45.729: INFO: successfully validated that service endpoint-test2 in namespace services-3023 exposes endpoints map[pod2:[80]] (2.185283782s elapsed)
STEP: Deleting pod pod2 in namespace services-3023
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3023 to expose endpoints map[]
May  6 18:22:47.211: INFO: successfully validated that service endpoint-test2 in namespace services-3023 exposes endpoints map[] (1.401777808s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:22:47.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3023" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.015 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":116,"skipped":1892,"failed":0}
S
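The endpoint validation above checks that the service's endpoint map tracks the set of ready pods: empty before any pod, `map[pod1:[80]]` after pod1, both pods, then shrinking back to empty as pods are deleted. (The transient mismatch at 18:22:41, where an endpoint briefly appeared under a UID-like key, resolved on retry.) A toy model of that state sequence, with illustrative names:

```python
# Endpoint map derived from ready pods: pod name -> container ports.
def endpoints_for(ready_pods):
    return {name: ports for name, ports in ready_pods.items()}

pods = {}
states = [endpoints_for(pods)]                    # map[] before any pod
pods["pod1"] = [80]; states.append(endpoints_for(pods))
pods["pod2"] = [80]; states.append(endpoints_for(pods))
del pods["pod1"];    states.append(endpoints_for(pods))
del pods["pod2"];    states.append(endpoints_for(pods))
```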
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:22:48.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May  6 18:22:50.936: INFO: Waiting up to 5m0s for pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4" in namespace "emptydir-6476" to be "Succeeded or Failed"
May  6 18:22:51.493: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 556.587105ms
May  6 18:22:53.670: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733559798s
May  6 18:22:55.732: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.796321473s
May  6 18:22:57.736: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4": Phase="Running", Reason="", readiness=true. Elapsed: 6.800010819s
May  6 18:22:59.741: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.804483335s
STEP: Saw pod success
May  6 18:22:59.741: INFO: Pod "pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4" satisfied condition "Succeeded or Failed"
May  6 18:22:59.744: INFO: Trying to get logs from node kali-worker pod pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4 container test-container: 
STEP: delete the pod
May  6 18:22:59.822: INFO: Waiting for pod pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4 to disappear
May  6 18:22:59.825: INFO: Pod pod-ccf4aedc-cf33-4be0-9aa4-c7dc5b04d3d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:22:59.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6476" for this suite.

• [SLOW TEST:11.497 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1893,"failed":0}
SSSSSSSSSSSS
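The `(root,0644,tmpfs)` case above mounts an emptyDir backed by tmpfs and verifies files carry mode 0644: read/write for the owner (root here), read-only for group and others. The permission bits can be checked with the standard `stat` module:

```python
import stat

# Mode 0644 as used by the emptyDir test variant above.
mode = 0o644
rendered = stat.filemode(stat.S_IFREG | mode)   # regular-file rendering
owner_can_write = bool(mode & stat.S_IWUSR)
group_can_write = bool(mode & stat.S_IWGRP)
```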
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:22:59.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:23:01.063: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:23:03.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:23:05.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:23:07.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386181, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:23:10.828: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:23:10.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3226" for this suite.
STEP: Destroying namespace "webhook-3226-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.567 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":118,"skipped":1905,"failed":0}
SSSSSSSSSSS
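The three "Creating a configMap that does not comply" steps above bracket the rule mutations: with CREATE in the webhook's rules the create is rejected, after the update removes CREATE it slips through, and after the patch restores CREATE it is rejected again. A sketch of that toggle, using plain dicts in place of the admissionregistration API objects (field names mirror the real ones, but this is not the client-go API):

```python
# A validating webhook rule; "operations" gates which verbs it intercepts.
rule = {"operations": ["CREATE"], "resources": ["configmaps"]}

def intercepts_create(rule):
    return "CREATE" in rule["operations"]

before = intercepts_create(rule)      # create rejected by the webhook
rule["operations"] = []               # update: drop the CREATE operation
during = intercepts_create(rule)      # create now passes validation
rule["operations"].append("CREATE")   # patch: restore the operation
after = intercepts_create(rule)       # create rejected again
```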
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:23:11.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  6 18:23:17.597: INFO: Successfully updated pod "annotationupdate924e923b-a34f-4672-abec-7285e6b7164d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:23:20.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5255" for this suite.

• [SLOW TEST:8.817 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":1916,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:23:20.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3803
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3803
STEP: creating replication controller externalsvc in namespace services-3803
I0506 18:23:20.701760       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3803, replica count: 2
I0506 18:23:23.752178       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:23:26.752421       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:23:29.752659       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May  6 18:23:30.359: INFO: Creating new exec pod
May  6 18:23:36.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3803 execpodcfjs6 -- /bin/sh -x -c nslookup clusterip-service'
May  6 18:23:40.835: INFO: stderr: "I0506 18:23:40.738020    2339 log.go:172] (0xc000a1a630) (0xc0009f4280) Create stream\nI0506 18:23:40.738057    2339 log.go:172] (0xc000a1a630) (0xc0009f4280) Stream added, broadcasting: 1\nI0506 18:23:40.740915    2339 log.go:172] (0xc000a1a630) Reply frame received for 1\nI0506 18:23:40.740956    2339 log.go:172] (0xc000a1a630) (0xc000936000) Create stream\nI0506 18:23:40.740967    2339 log.go:172] (0xc000a1a630) (0xc000936000) Stream added, broadcasting: 3\nI0506 18:23:40.742172    2339 log.go:172] (0xc000a1a630) Reply frame received for 3\nI0506 18:23:40.742199    2339 log.go:172] (0xc000a1a630) (0xc0009f4320) Create stream\nI0506 18:23:40.742207    2339 log.go:172] (0xc000a1a630) (0xc0009f4320) Stream added, broadcasting: 5\nI0506 18:23:40.743246    2339 log.go:172] (0xc000a1a630) Reply frame received for 5\nI0506 18:23:40.813922    2339 log.go:172] (0xc000a1a630) Data frame received for 5\nI0506 18:23:40.813945    2339 log.go:172] (0xc0009f4320) (5) Data frame handling\nI0506 18:23:40.813957    2339 log.go:172] (0xc0009f4320) (5) Data frame sent\n+ nslookup clusterip-service\nI0506 18:23:40.820914    2339 log.go:172] (0xc000a1a630) Data frame received for 3\nI0506 18:23:40.820938    2339 log.go:172] (0xc000936000) (3) Data frame handling\nI0506 18:23:40.820954    2339 log.go:172] (0xc000936000) (3) Data frame sent\nI0506 18:23:40.822587    2339 log.go:172] (0xc000a1a630) Data frame received for 3\nI0506 18:23:40.822601    2339 log.go:172] (0xc000936000) (3) Data frame handling\nI0506 18:23:40.822609    2339 log.go:172] (0xc000936000) (3) Data frame sent\nI0506 18:23:40.823153    2339 log.go:172] (0xc000a1a630) Data frame received for 5\nI0506 18:23:40.823168    2339 log.go:172] (0xc0009f4320) (5) Data frame handling\nI0506 18:23:40.823562    2339 log.go:172] (0xc000a1a630) Data frame received for 3\nI0506 18:23:40.823610    2339 log.go:172] (0xc000936000) (3) Data frame handling\nI0506 18:23:40.825548    2339 log.go:172] (0xc000a1a630) Data frame received for 1\nI0506 18:23:40.825622    2339 log.go:172] (0xc0009f4280) (1) Data frame handling\nI0506 18:23:40.825662    2339 log.go:172] (0xc0009f4280) (1) Data frame sent\nI0506 18:23:40.825792    2339 log.go:172] (0xc000a1a630) (0xc0009f4280) Stream removed, broadcasting: 1\nI0506 18:23:40.826074    2339 log.go:172] (0xc000a1a630) Go away received\nI0506 18:23:40.826227    2339 log.go:172] (0xc000a1a630) (0xc0009f4280) Stream removed, broadcasting: 1\nI0506 18:23:40.826242    2339 log.go:172] (0xc000a1a630) (0xc000936000) Stream removed, broadcasting: 3\nI0506 18:23:40.826249    2339 log.go:172] (0xc000a1a630) (0xc0009f4320) Stream removed, broadcasting: 5\n"
May  6 18:23:40.836: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3803.svc.cluster.local\tcanonical name = externalsvc.services-3803.svc.cluster.local.\nName:\texternalsvc.services-3803.svc.cluster.local\nAddress: 10.107.209.28\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3803, will wait for the garbage collector to delete the pods
May  6 18:23:40.895: INFO: Deleting ReplicationController externalsvc took: 5.602497ms
May  6 18:23:41.295: INFO: Terminating ReplicationController externalsvc pods took: 400.237134ms
May  6 18:23:47.634: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:23:48.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3803" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:28.317 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":120,"skipped":1932,"failed":0}
SSSS
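The nslookup stdout at 18:23:40 shows what the type change accomplished: `clusterip-service` now answers with a CNAME pointing at `externalsvc`, which carries the A record 10.107.209.28. A minimal CNAME-chain resolver over that exact record set (illustrative data structures, not a DNS client):

```python
# Tiny resolver mirroring the nslookup output from the log above.
records = {
    "clusterip-service.services-3803.svc.cluster.local":
        ("CNAME", "externalsvc.services-3803.svc.cluster.local"),
    "externalsvc.services-3803.svc.cluster.local":
        ("A", "10.107.209.28"),
}

def resolve(name, records, max_hops=8):
    for _ in range(max_hops):
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value          # follow the CNAME one hop
    raise RuntimeError("CNAME chain too long")

addr = resolve("clusterip-service.services-3803.svc.cluster.local", records)
```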
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:23:48.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:23:49.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  6 18:23:53.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6076 create -f -'
May  6 18:24:00.106: INFO: stderr: ""
May  6 18:24:00.106: INFO: stdout: "e2e-test-crd-publish-openapi-9778-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May  6 18:24:00.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6076 delete e2e-test-crd-publish-openapi-9778-crds test-cr'
May  6 18:24:00.213: INFO: stderr: ""
May  6 18:24:00.213: INFO: stdout: "e2e-test-crd-publish-openapi-9778-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May  6 18:24:00.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6076 apply -f -'
May  6 18:24:00.484: INFO: stderr: ""
May  6 18:24:00.484: INFO: stdout: "e2e-test-crd-publish-openapi-9778-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May  6 18:24:00.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6076 delete e2e-test-crd-publish-openapi-9778-crds test-cr'
May  6 18:24:00.595: INFO: stderr: ""
May  6 18:24:00.595: INFO: stdout: "e2e-test-crd-publish-openapi-9778-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May  6 18:24:00.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9778-crds'
May  6 18:24:00.864: INFO: stderr: ""
May  6 18:24:00.864: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9778-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:24:03.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6076" for this suite.

• [SLOW TEST:15.267 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":121,"skipped":1936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:24:03.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6288, will wait for the garbage collector to delete the pods
May  6 18:24:09.984: INFO: Deleting Job.batch foo took: 6.304736ms
May  6 18:24:10.184: INFO: Terminating Job.batch foo pods took: 200.25123ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:24:53.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6288" for this suite.

• [SLOW TEST:49.693 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":122,"skipped":1967,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:24:53.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-5ad998c2-7f0d-4c98-8404-e90d611ffab0
STEP: Creating a pod to test consume secrets
May  6 18:24:53.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d" in namespace "projected-4245" to be "Succeeded or Failed"
May  6 18:24:53.567: INFO: Pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.828603ms
May  6 18:24:55.571: INFO: Pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015965195s
May  6 18:24:57.575: INFO: Pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020073304s
May  6 18:24:59.900: INFO: Pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.34468204s
STEP: Saw pod success
May  6 18:24:59.900: INFO: Pod "pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d" satisfied condition "Succeeded or Failed"
May  6 18:24:59.903: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d container projected-secret-volume-test: 
STEP: delete the pod
May  6 18:24:59.999: INFO: Waiting for pod pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d to disappear
May  6 18:25:00.146: INFO: Pod pod-projected-secrets-e4179ccf-442b-4196-b06c-da724713e12d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:25:00.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4245" for this suite.

• [SLOW TEST:6.655 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2008,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:25:00.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-cnh9
STEP: Creating a pod to test atomic-volume-subpath
May  6 18:25:01.698: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cnh9" in namespace "subpath-9654" to be "Succeeded or Failed"
May  6 18:25:01.774: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Pending", Reason="", readiness=false. Elapsed: 76.06031ms
May  6 18:25:03.778: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080079666s
May  6 18:25:05.784: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085643205s
May  6 18:25:07.787: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 6.088626701s
May  6 18:25:10.138: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 8.440146359s
May  6 18:25:12.337: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 10.639076477s
May  6 18:25:14.433: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 12.735333296s
May  6 18:25:16.655: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 14.956692519s
May  6 18:25:18.659: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 16.961206112s
May  6 18:25:20.664: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 18.965625393s
May  6 18:25:22.668: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 20.96983435s
May  6 18:25:24.684: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Running", Reason="", readiness=true. Elapsed: 22.986392627s
May  6 18:25:26.707: INFO: Pod "pod-subpath-test-configmap-cnh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.008444531s
STEP: Saw pod success
May  6 18:25:26.707: INFO: Pod "pod-subpath-test-configmap-cnh9" satisfied condition "Succeeded or Failed"
May  6 18:25:26.709: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-cnh9 container test-container-subpath-configmap-cnh9: 
STEP: delete the pod
May  6 18:25:26.863: INFO: Waiting for pod pod-subpath-test-configmap-cnh9 to disappear
May  6 18:25:26.871: INFO: Pod pod-subpath-test-configmap-cnh9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cnh9
May  6 18:25:26.871: INFO: Deleting pod "pod-subpath-test-configmap-cnh9" in namespace "subpath-9654"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:25:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9654" for this suite.

• [SLOW TEST:26.729 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":124,"skipped":2024,"failed":0}
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:25:26.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May  6 18:25:27.506: INFO: created pod pod-service-account-defaultsa
May  6 18:25:27.506: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May  6 18:25:27.515: INFO: created pod pod-service-account-mountsa
May  6 18:25:27.515: INFO: pod pod-service-account-mountsa service account token volume mount: true
May  6 18:25:27.534: INFO: created pod pod-service-account-nomountsa
May  6 18:25:27.534: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May  6 18:25:27.545: INFO: created pod pod-service-account-defaultsa-mountspec
May  6 18:25:27.545: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May  6 18:25:27.604: INFO: created pod pod-service-account-mountsa-mountspec
May  6 18:25:27.604: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May  6 18:25:27.625: INFO: created pod pod-service-account-nomountsa-mountspec
May  6 18:25:27.625: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May  6 18:25:27.650: INFO: created pod pod-service-account-defaultsa-nomountspec
May  6 18:25:27.650: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May  6 18:25:27.679: INFO: created pod pod-service-account-mountsa-nomountspec
May  6 18:25:27.679: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May  6 18:25:27.745: INFO: created pod pod-service-account-nomountsa-nomountspec
May  6 18:25:27.746: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:25:27.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5534" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":125,"skipped":2024,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:25:27.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:25:28.593: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:25:30.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:33.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:35.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:37.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:39.420: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:41.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:25:43.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386329, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386328, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:25:46.201: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:25:58.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4337" for this suite.
STEP: Destroying namespace "webhook-4337-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:30.905 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":126,"skipped":2026,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:25:58.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May  6 18:25:58.899: INFO: Waiting up to 5m0s for pod "pod-88b8df89-370c-4ae7-be74-f3b5e764a703" in namespace "emptydir-6717" to be "Succeeded or Failed"
May  6 18:25:58.915: INFO: Pod "pod-88b8df89-370c-4ae7-be74-f3b5e764a703": Phase="Pending", Reason="", readiness=false. Elapsed: 15.677179ms
May  6 18:26:00.926: INFO: Pod "pod-88b8df89-370c-4ae7-be74-f3b5e764a703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027574101s
May  6 18:26:02.931: INFO: Pod "pod-88b8df89-370c-4ae7-be74-f3b5e764a703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031732736s
STEP: Saw pod success
May  6 18:26:02.931: INFO: Pod "pod-88b8df89-370c-4ae7-be74-f3b5e764a703" satisfied condition "Succeeded or Failed"
May  6 18:26:02.933: INFO: Trying to get logs from node kali-worker2 pod pod-88b8df89-370c-4ae7-be74-f3b5e764a703 container test-container: 
STEP: delete the pod
May  6 18:26:03.130: INFO: Waiting for pod pod-88b8df89-370c-4ae7-be74-f3b5e764a703 to disappear
May  6 18:26:03.154: INFO: Pod pod-88b8df89-370c-4ae7-be74-f3b5e764a703 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6717" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2032,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:03.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May  6 18:26:09.308: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3943 PodName:pod-sharedvolume-7a54a61e-fda9-4cbc-983f-85ab1edcfeef ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:26:09.308: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:26:09.347021       7 log.go:172] (0xc0026ac000) (0xc001a80140) Create stream
I0506 18:26:09.347054       7 log.go:172] (0xc0026ac000) (0xc001a80140) Stream added, broadcasting: 1
I0506 18:26:09.349482       7 log.go:172] (0xc0026ac000) Reply frame received for 1
I0506 18:26:09.349532       7 log.go:172] (0xc0026ac000) (0xc001e9a000) Create stream
I0506 18:26:09.349548       7 log.go:172] (0xc0026ac000) (0xc001e9a000) Stream added, broadcasting: 3
I0506 18:26:09.350485       7 log.go:172] (0xc0026ac000) Reply frame received for 3
I0506 18:26:09.350512       7 log.go:172] (0xc0026ac000) (0xc001e9a0a0) Create stream
I0506 18:26:09.350526       7 log.go:172] (0xc0026ac000) (0xc001e9a0a0) Stream added, broadcasting: 5
I0506 18:26:09.351429       7 log.go:172] (0xc0026ac000) Reply frame received for 5
I0506 18:26:09.428802       7 log.go:172] (0xc0026ac000) Data frame received for 5
I0506 18:26:09.428826       7 log.go:172] (0xc001e9a0a0) (5) Data frame handling
I0506 18:26:09.428844       7 log.go:172] (0xc0026ac000) Data frame received for 3
I0506 18:26:09.428851       7 log.go:172] (0xc001e9a000) (3) Data frame handling
I0506 18:26:09.428867       7 log.go:172] (0xc001e9a000) (3) Data frame sent
I0506 18:26:09.428873       7 log.go:172] (0xc0026ac000) Data frame received for 3
I0506 18:26:09.428882       7 log.go:172] (0xc001e9a000) (3) Data frame handling
I0506 18:26:09.430526       7 log.go:172] (0xc0026ac000) Data frame received for 1
I0506 18:26:09.430551       7 log.go:172] (0xc001a80140) (1) Data frame handling
I0506 18:26:09.430568       7 log.go:172] (0xc001a80140) (1) Data frame sent
I0506 18:26:09.430584       7 log.go:172] (0xc0026ac000) (0xc001a80140) Stream removed, broadcasting: 1
I0506 18:26:09.430627       7 log.go:172] (0xc0026ac000) Go away received
I0506 18:26:09.430696       7 log.go:172] (0xc0026ac000) (0xc001a80140) Stream removed, broadcasting: 1
I0506 18:26:09.430713       7 log.go:172] (0xc0026ac000) (0xc001e9a000) Stream removed, broadcasting: 3
I0506 18:26:09.430723       7 log.go:172] (0xc0026ac000) (0xc001e9a0a0) Stream removed, broadcasting: 5
May  6 18:26:09.430: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:09.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3943" for this suite.

• [SLOW TEST:6.272 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":128,"skipped":2038,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:09.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:09.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5988" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":129,"skipped":2053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:09.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May  6 18:26:20.658: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:20.751: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:22.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:22.756: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:24.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:24.756: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:26.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:26.756: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:28.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:28.755: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:30.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:30.781: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:32.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:32.756: INFO: Pod pod-with-prestop-http-hook still exists
May  6 18:26:34.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  6 18:26:34.755: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:34.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9957" for this suite.

• [SLOW TEST:25.146 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2076,"failed":0}
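The prestop-hook test above creates a pod whose container carries a `preStop` HTTP hook and then deletes it, waiting for the handler pod to record the request. A rough sketch of the pod spec involved (image, path, port, and target IP are hypothetical; the real test points the hook at its handler pod's IP):

```yaml
# Sketch only: a pod with a preStop HTTP hook, roughly what the test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2      # hypothetical image choice
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # hypothetical path on the handler
          port: 8080
          host: 10.244.1.2           # hypothetical handler-pod IP
```

On deletion, the kubelet issues the GET to `host:port/path` before sending the container its termination signal, which is why the log shows the pod lingering for several poll intervals before it disappears.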
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:34.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:26:34.932: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:41.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9245" for this suite.

• [SLOW TEST:6.964 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":131,"skipped":2099,"failed":0}
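The listing test above creates several CustomResourceDefinitions and verifies they appear in a list call. A minimal `apiextensions.k8s.io/v1` CRD of the kind involved (group, names, and schema here are hypothetical placeholders):

```yaml
# Sketch only: a minimal v1 CRD like those the listing test creates.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # v1 CRDs require a structural schema
        type: object
        x-kubernetes-preserve-unknown-fields: true
```

Once registered, such objects show up in `GET /apis/apiextensions.k8s.io/v1/customresourcedefinitions`, which is what the test enumerates.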
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:41.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-e499c1fc-2caf-467f-b22e-f23fcab35235
STEP: Creating a pod to test consume configMaps
May  6 18:26:41.865: INFO: Waiting up to 5m0s for pod "pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e" in namespace "configmap-1485" to be "Succeeded or Failed"
May  6 18:26:41.870: INFO: Pod "pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522534ms
May  6 18:26:43.876: INFO: Pod "pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010591769s
May  6 18:26:46.002: INFO: Pod "pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136441222s
STEP: Saw pod success
May  6 18:26:46.002: INFO: Pod "pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e" satisfied condition "Succeeded or Failed"
May  6 18:26:46.005: INFO: Trying to get logs from node kali-worker pod pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e container configmap-volume-test: 
STEP: delete the pod
May  6 18:26:46.417: INFO: Waiting for pod pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e to disappear
May  6 18:26:46.461: INFO: Pod pod-configmaps-a15ddd34-09a9-4118-8302-7ecf99fc2f5e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:46.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1485" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2104,"failed":0}
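The ConfigMap-volume test above mounts a ConfigMap into a pod and reads a key back from the filesystem. A sketch of the pod spec pattern (the image, command, key name, and mount path are hypothetical; the ConfigMap name matches the one in the log):

```yaml
# Sketch only: consuming a ConfigMap key as a file in a volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                   # hypothetical; the test uses an e2e test image
    command: ["cat", "/etc/configmap-volume/data-1"]   # hypothetical key/path
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-e499c1fc-2caf-467f-b22e-f23fcab35235
```

Each key in the ConfigMap becomes a file under the mount path; the test asserts the container's logs contain the expected file contents, which is why the pod is expected to reach `Succeeded`.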
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:46.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-9e230c42-b6a7-4a27-a246-33acb6464855
STEP: Creating a pod to test consume secrets
May  6 18:26:46.882: INFO: Waiting up to 5m0s for pod "pod-secrets-2168350c-7228-483f-a51a-c682416c650e" in namespace "secrets-8959" to be "Succeeded or Failed"
May  6 18:26:46.955: INFO: Pod "pod-secrets-2168350c-7228-483f-a51a-c682416c650e": Phase="Pending", Reason="", readiness=false. Elapsed: 72.878569ms
May  6 18:26:49.039: INFO: Pod "pod-secrets-2168350c-7228-483f-a51a-c682416c650e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156149428s
May  6 18:26:51.068: INFO: Pod "pod-secrets-2168350c-7228-483f-a51a-c682416c650e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185614383s
STEP: Saw pod success
May  6 18:26:51.068: INFO: Pod "pod-secrets-2168350c-7228-483f-a51a-c682416c650e" satisfied condition "Succeeded or Failed"
May  6 18:26:51.114: INFO: Trying to get logs from node kali-worker pod pod-secrets-2168350c-7228-483f-a51a-c682416c650e container secret-env-test: 
STEP: delete the pod
May  6 18:26:51.207: INFO: Waiting for pod pod-secrets-2168350c-7228-483f-a51a-c682416c650e to disappear
May  6 18:26:51.228: INFO: Pod pod-secrets-2168350c-7228-483f-a51a-c682416c650e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:51.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8959" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2118,"failed":0}
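The Secrets-env test above injects a Secret key into a container's environment and checks the container's output. A sketch of the pattern (image, command, env-var name, and key are hypothetical; the Secret name matches the log):

```yaml
# Sketch only: consuming a Secret key as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                   # hypothetical image choice
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA              # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-9e230c42-b6a7-4a27-a246-33acb6464855
          key: data-1                # hypothetical key
```

The test then reads the pod's logs and asserts the decoded secret value appears in the printed environment.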

------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:51.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May  6 18:26:55.411: INFO: Pod pod-hostip-3c13ebad-8311-4207-8540-e49ac6c2ff94 has hostIP: 172.17.0.18
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:26:55.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3916" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2118,"failed":0}
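The host-IP test above reads `status.hostIP` from the pod's status (172.17.0.18 in this run). Applications can obtain the same value without an API call via the downward API; a container-spec fragment sketching this (the variable name is hypothetical):

```yaml
# Fragment only: expose the node's IP to the container via the downward API.
env:
- name: HOST_IP                      # hypothetical variable name
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
```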
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:26:55.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May  6 18:26:55.550: INFO: >>> kubeConfig: /root/.kube/config
May  6 18:26:57.513: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:27:09.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-450" for this suite.

• [SLOW TEST:14.269 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":135,"skipped":2139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:27:09.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:27:10.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4" in namespace "projected-7310" to be "Succeeded or Failed"
May  6 18:27:10.260: INFO: Pod "downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4": Phase="Pending", Reason="", readiness=false. Elapsed: 164.791387ms
May  6 18:27:12.264: INFO: Pod "downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16959011s
May  6 18:27:14.268: INFO: Pod "downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173312739s
STEP: Saw pod success
May  6 18:27:14.268: INFO: Pod "downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4" satisfied condition "Succeeded or Failed"
May  6 18:27:14.271: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4 container client-container: 
STEP: delete the pod
May  6 18:27:14.333: INFO: Waiting for pod downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4 to disappear
May  6 18:27:14.344: INFO: Pod downwardapi-volume-a3939309-8344-4d08-a3a6-f0459284bcd4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:27:14.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7310" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2192,"failed":0}
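The projected-downwardAPI test above verifies that a per-item `mode` is honored on a file exposed through a projected volume. A volume-spec fragment sketching the construct (path and field are hypothetical placeholders):

```yaml
# Fragment only: a projected downward-API item with an explicit file mode.
volumes:
- name: podinfo                      # hypothetical volume name
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname              # hypothetical file path inside the mount
          fieldRef:
            fieldPath: metadata.name
          mode: 0400                 # the per-item mode the test asserts on
```

The test's container stats the mounted file and succeeds only if the permission bits match, hence the `[LinuxOnly]` tag.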
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:27:14.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May  6 18:27:18.473: INFO: &Pod{ObjectMeta:{send-events-c23ec430-4498-4ef3-a2e8-10cb40f2b055  events-6734 /api/v1/namespaces/events-6734/pods/send-events-c23ec430-4498-4ef3-a2e8-10cb40f2b055 d3cf7184-3c0f-447f-99b5-2e540acddcd0 2064102 0 2020-05-06 18:27:14 +0000 UTC   map[name:foo time:423945163] map[] [] []  [{e2e.test Update v1 2020-05-06 18:27:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:27:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 54 53 92 34 125 
34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n49zx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n49zx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n49zx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePo
licy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:27:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:27:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.65,StartTime:2020-05-06 18:27:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:27:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://315d7dea4c51524f77432d85338f43667f7f869e25c213a255533f99defcb2b8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
May  6 18:27:20.478: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May  6 18:27:22.483: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:27:22.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6734" for this suite.

• [SLOW TEST:8.209 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":137,"skipped":2276,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:27:22.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-172
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 18:27:22.789: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 18:27:22.937: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:27:25.014: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:27:26.991: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:27:28.966: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:27:30.941: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:27:32.941: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:27:34.940: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:27:36.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:27:38.941: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 18:27:38.948: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 18:27:40.952: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 18:27:42.953: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 18:27:44.952: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 18:27:51.420: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.66 8081 | grep -v '^\s*$'] Namespace:pod-network-test-172 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:27:51.420: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:27:51.453435       7 log.go:172] (0xc002d4c790) (0xc001e9b0e0) Create stream
I0506 18:27:51.453485       7 log.go:172] (0xc002d4c790) (0xc001e9b0e0) Stream added, broadcasting: 1
I0506 18:27:51.455234       7 log.go:172] (0xc002d4c790) Reply frame received for 1
I0506 18:27:51.455277       7 log.go:172] (0xc002d4c790) (0xc001a80460) Create stream
I0506 18:27:51.455290       7 log.go:172] (0xc002d4c790) (0xc001a80460) Stream added, broadcasting: 3
I0506 18:27:51.456206       7 log.go:172] (0xc002d4c790) Reply frame received for 3
I0506 18:27:51.456255       7 log.go:172] (0xc002d4c790) (0xc000ebadc0) Create stream
I0506 18:27:51.456268       7 log.go:172] (0xc002d4c790) (0xc000ebadc0) Stream added, broadcasting: 5
I0506 18:27:51.457571       7 log.go:172] (0xc002d4c790) Reply frame received for 5
I0506 18:27:52.518217       7 log.go:172] (0xc002d4c790) Data frame received for 5
I0506 18:27:52.518248       7 log.go:172] (0xc000ebadc0) (5) Data frame handling
I0506 18:27:52.518270       7 log.go:172] (0xc002d4c790) Data frame received for 3
I0506 18:27:52.518295       7 log.go:172] (0xc001a80460) (3) Data frame handling
I0506 18:27:52.518324       7 log.go:172] (0xc001a80460) (3) Data frame sent
I0506 18:27:52.518912       7 log.go:172] (0xc002d4c790) Data frame received for 3
I0506 18:27:52.518932       7 log.go:172] (0xc001a80460) (3) Data frame handling
I0506 18:27:52.519748       7 log.go:172] (0xc002d4c790) Data frame received for 1
I0506 18:27:52.519766       7 log.go:172] (0xc001e9b0e0) (1) Data frame handling
I0506 18:27:52.519778       7 log.go:172] (0xc001e9b0e0) (1) Data frame sent
I0506 18:27:52.519798       7 log.go:172] (0xc002d4c790) (0xc001e9b0e0) Stream removed, broadcasting: 1
I0506 18:27:52.519819       7 log.go:172] (0xc002d4c790) Go away received
I0506 18:27:52.519973       7 log.go:172] (0xc002d4c790) (0xc001e9b0e0) Stream removed, broadcasting: 1
I0506 18:27:52.519999       7 log.go:172] (0xc002d4c790) (0xc001a80460) Stream removed, broadcasting: 3
I0506 18:27:52.520008       7 log.go:172] (0xc002d4c790) (0xc000ebadc0) Stream removed, broadcasting: 5
May  6 18:27:52.520: INFO: Found all expected endpoints: [netserver-0]
May  6 18:27:52.523: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.84 8081 | grep -v '^\s*$'] Namespace:pod-network-test-172 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:27:52.523: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:27:52.547977       7 log.go:172] (0xc002aa2580) (0xc001996640) Create stream
I0506 18:27:52.548009       7 log.go:172] (0xc002aa2580) (0xc001996640) Stream added, broadcasting: 1
I0506 18:27:52.550296       7 log.go:172] (0xc002aa2580) Reply frame received for 1
I0506 18:27:52.550336       7 log.go:172] (0xc002aa2580) (0xc001e9b180) Create stream
I0506 18:27:52.550351       7 log.go:172] (0xc002aa2580) (0xc001e9b180) Stream added, broadcasting: 3
I0506 18:27:52.551169       7 log.go:172] (0xc002aa2580) Reply frame received for 3
I0506 18:27:52.551189       7 log.go:172] (0xc002aa2580) (0xc001e9b220) Create stream
I0506 18:27:52.551200       7 log.go:172] (0xc002aa2580) (0xc001e9b220) Stream added, broadcasting: 5
I0506 18:27:52.552196       7 log.go:172] (0xc002aa2580) Reply frame received for 5
I0506 18:27:53.610333       7 log.go:172] (0xc002aa2580) Data frame received for 3
I0506 18:27:53.610367       7 log.go:172] (0xc001e9b180) (3) Data frame handling
I0506 18:27:53.610379       7 log.go:172] (0xc001e9b180) (3) Data frame sent
I0506 18:27:53.610386       7 log.go:172] (0xc002aa2580) Data frame received for 3
I0506 18:27:53.610391       7 log.go:172] (0xc001e9b180) (3) Data frame handling
I0506 18:27:53.610588       7 log.go:172] (0xc002aa2580) Data frame received for 5
I0506 18:27:53.610613       7 log.go:172] (0xc001e9b220) (5) Data frame handling
I0506 18:27:53.612367       7 log.go:172] (0xc002aa2580) Data frame received for 1
I0506 18:27:53.612381       7 log.go:172] (0xc001996640) (1) Data frame handling
I0506 18:27:53.612398       7 log.go:172] (0xc001996640) (1) Data frame sent
I0506 18:27:53.612406       7 log.go:172] (0xc002aa2580) (0xc001996640) Stream removed, broadcasting: 1
I0506 18:27:53.612423       7 log.go:172] (0xc002aa2580) Go away received
I0506 18:27:53.612558       7 log.go:172] (0xc002aa2580) (0xc001996640) Stream removed, broadcasting: 1
I0506 18:27:53.612571       7 log.go:172] (0xc002aa2580) (0xc001e9b180) Stream removed, broadcasting: 3
I0506 18:27:53.612577       7 log.go:172] (0xc002aa2580) (0xc001e9b220) Stream removed, broadcasting: 5
May  6 18:27:53.612: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:27:53.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-172" for this suite.

• [SLOW TEST:31.096 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSS
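The UDP check in the spec above pipes `hostName` to `nc -u` against each netserver pod and greps for a non-empty reply. A minimal local stand-in for that request/reply exchange (plain UDP sockets on localhost; the one-word protocol is modeled on, not taken from, agnhost):

```python
import socket
import threading

def udp_hostname_server(sock):
    """Reply to each 'hostName' datagram with this host's name (illustrative handler)."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data.strip() == b"hostName":
            sock.sendto(socket.gethostname().encode(), addr)

def probe_udp(host, port, timeout=1.0):
    """Send 'hostName' over UDP and return the non-empty reply, or None on timeout,
    mirroring `echo hostName | nc -w 1 -u <ip> <port> | grep -v '^\\s*$'`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(timeout)
        c.sendto(b"hostName", (host, port))
        try:
            reply, _ = c.recvfrom(1024)
        except socket.timeout:
            return None
    return reply.decode() or None

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))  # OS-assigned port
port = srv.getsockname()[1]
threading.Thread(target=udp_hostname_server, args=(srv,), daemon=True).start()
print(probe_udp("127.0.0.1", port))
```

The e2e test runs the equivalent exchange from a host-network pod to each pod IP and passes once every expected endpoint has answered.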
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:27:53.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-1423
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1423 to expose endpoints map[]
May  6 18:27:54.516: INFO: Get endpoints failed (4.052315ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May  6 18:27:55.520: INFO: successfully validated that service multi-endpoint-test in namespace services-1423 exposes endpoints map[] (1.00783581s elapsed)
STEP: Creating pod pod1 in namespace services-1423
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1423 to expose endpoints map[pod1:[100]]
May  6 18:27:59.717: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.189874069s elapsed, will retry)
May  6 18:28:01.018: INFO: successfully validated that service multi-endpoint-test in namespace services-1423 exposes endpoints map[pod1:[100]] (5.490922643s elapsed)
STEP: Creating pod pod2 in namespace services-1423
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1423 to expose endpoints map[pod1:[100] pod2:[101]]
May  6 18:28:04.174: INFO: successfully validated that service multi-endpoint-test in namespace services-1423 exposes endpoints map[pod1:[100] pod2:[101]] (3.152416601s elapsed)
STEP: Deleting pod pod1 in namespace services-1423
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1423 to expose endpoints map[pod2:[101]]
May  6 18:28:05.394: INFO: successfully validated that service multi-endpoint-test in namespace services-1423 exposes endpoints map[pod2:[101]] (1.214635672s elapsed)
STEP: Deleting pod pod2 in namespace services-1423
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1423 to expose endpoints map[]
May  6 18:28:06.689: INFO: successfully validated that service multi-endpoint-test in namespace services-1423 exposes endpoints map[] (1.291469178s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:06.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1423" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:13.456 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":139,"skipped":2305,"failed":0}
SSSS
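The multiport-endpoints spec repeatedly reads the service's endpoints and compares them against an expected map until they match or the 3m0s budget runs out, logging the elapsed time on success. The wait loop can be sketched as follows, with `get_endpoints` a hypothetical stand-in for the real API read:

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=1.0):
    """Poll get_endpoints() until it equals `expected` or `timeout` elapses.

    Returns the elapsed seconds on success (cf. the
    'successfully validated ... (Ns elapsed)' lines above);
    raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if get_endpoints() == expected:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"endpoints never became {expected!r}")
        time.sleep(interval)

# Simulated API: the endpoint appears on the third read, as when pod1 becomes ready.
reads = iter([{}, {}, {"pod1": [100]}])
elapsed = wait_for_endpoints(lambda: next(reads), {"pod1": [100]}, interval=0.01)
print(f"validated after {elapsed:.2f}s")
```

Early non-matching reads (like the "Unexpected endpoints: found map[]" line) are simply retried inside the same budget.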
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:07.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
May  6 18:28:07.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-7230 -- logs-generator --log-lines-total 100 --run-duration 20s'
May  6 18:28:07.310: INFO: stderr: ""
May  6 18:28:07.310: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May  6 18:28:07.310: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May  6 18:28:07.310: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7230" to be "running and ready, or succeeded"
May  6 18:28:07.325: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.11887ms
May  6 18:28:09.392: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081057869s
May  6 18:28:11.712: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401371028s
May  6 18:28:13.739: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428521955s
May  6 18:28:15.743: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.432703311s
May  6 18:28:15.743: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May  6 18:28:15.743: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May  6 18:28:15.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230'
May  6 18:28:15.906: INFO: stderr: ""
May  6 18:28:15.906: INFO: stdout: "I0506 18:28:12.895152       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/vnr 586\nI0506 18:28:13.095425       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/65d 415\nI0506 18:28:13.295330       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/phm 596\nI0506 18:28:13.495297       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/x2t 392\nI0506 18:28:13.695341       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/74v 495\nI0506 18:28:13.895347       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/rv7 310\nI0506 18:28:14.095336       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b65 370\nI0506 18:28:14.295330       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/prk4 422\nI0506 18:28:14.495382       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/tz7 478\nI0506 18:28:14.695325       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/bnhf 549\nI0506 18:28:14.895319       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/hzcb 389\nI0506 18:28:15.095334       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/ztx 499\nI0506 18:28:15.295326       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/sgwj 367\nI0506 18:28:15.495358       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/72fz 391\nI0506 18:28:15.695396       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/r4xt 346\nI0506 18:28:15.895308       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nzh 508\n"
STEP: limiting log lines
May  6 18:28:15.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --tail=1'
May  6 18:28:16.057: INFO: stderr: ""
May  6 18:28:16.057: INFO: stdout: "I0506 18:28:15.895308       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nzh 508\n"
May  6 18:28:16.057: INFO: got output "I0506 18:28:15.895308       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nzh 508\n"
STEP: limiting log bytes
May  6 18:28:16.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --limit-bytes=1'
May  6 18:28:16.356: INFO: stderr: ""
May  6 18:28:16.356: INFO: stdout: "I"
May  6 18:28:16.356: INFO: got output "I"
STEP: exposing timestamps
May  6 18:28:16.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --tail=1 --timestamps'
May  6 18:28:16.463: INFO: stderr: ""
May  6 18:28:16.463: INFO: stdout: "2020-05-06T18:28:16.295529171Z I0506 18:28:16.295338       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/vf8c 254\n"
May  6 18:28:16.463: INFO: got output "2020-05-06T18:28:16.295529171Z I0506 18:28:16.295338       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/vf8c 254\n"
STEP: restricting to a time range
May  6 18:28:18.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --since=1s'
May  6 18:28:19.068: INFO: stderr: ""
May  6 18:28:19.068: INFO: stdout: "I0506 18:28:18.095296       1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/t9f4 435\nI0506 18:28:18.295310       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/7zd 337\nI0506 18:28:18.495373       1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/dtg 501\nI0506 18:28:18.695348       1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/9m7 235\nI0506 18:28:18.895319       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/ns/pods/nmj 454\n"
May  6 18:28:19.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7230 --since=24h'
May  6 18:28:19.179: INFO: stderr: ""
May  6 18:28:19.179: INFO: stdout: "I0506 18:28:12.895152       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/vnr 586\nI0506 18:28:13.095425       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/65d 415\nI0506 18:28:13.295330       1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/phm 596\nI0506 18:28:13.495297       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/x2t 392\nI0506 18:28:13.695341       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/74v 495\nI0506 18:28:13.895347       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/rv7 310\nI0506 18:28:14.095336       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/b65 370\nI0506 18:28:14.295330       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/prk4 422\nI0506 18:28:14.495382       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/tz7 478\nI0506 18:28:14.695325       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/bnhf 549\nI0506 18:28:14.895319       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/hzcb 389\nI0506 18:28:15.095334       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/ztx 499\nI0506 18:28:15.295326       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/sgwj 367\nI0506 18:28:15.495358       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/72fz 391\nI0506 18:28:15.695396       1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/r4xt 346\nI0506 18:28:15.895308       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nzh 508\nI0506 18:28:16.095317       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/5d5f 375\nI0506 18:28:16.295338       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/vf8c 254\nI0506 18:28:16.495338       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/brdc 514\nI0506 18:28:16.695356       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/6v2x 476\nI0506 18:28:16.895320       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/kbhx 226\nI0506 18:28:17.095365       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/p7b 520\nI0506 18:28:17.295341       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/rtfp 220\nI0506 18:28:17.495349       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/2d5 225\nI0506 18:28:17.695370       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/nqwk 534\nI0506 18:28:17.895337       1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/msz 329\nI0506 18:28:18.095296       1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/t9f4 435\nI0506 18:28:18.295310       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/7zd 337\nI0506 18:28:18.495373       1 logs_generator.go:76] 28 GET /api/v1/namespaces/default/pods/dtg 501\nI0506 18:28:18.695348       1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/9m7 235\nI0506 18:28:18.895319       1 logs_generator.go:76] 30 PUT /api/v1/namespaces/ns/pods/nmj 454\nI0506 18:28:19.095316       1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/lzn2 230\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May  6 18:28:19.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7230'
May  6 18:28:22.305: INFO: stderr: ""
May  6 18:28:22.305: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:22.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7230" for this suite.

• [SLOW TEST:15.182 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":140,"skipped":2309,"failed":0}
SSSSSSSSSSSSSSSS
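The filters exercised above (`--tail=1`, `--limit-bytes=1`, `--since=1s`, `--timestamps`) each restrict the same log stream along a different dimension: trailing line count, total byte count, and time window. The first two can be modeled on a plain list of lines; this is a sketch of the observable behavior, not kubelet's implementation:

```python
def tail(lines, n):
    """--tail=N: keep only the last N lines of the stream."""
    return lines[-n:]

def limit_bytes(lines, n):
    """--limit-bytes=N: truncate the concatenated stream to N bytes,
    which is why --limit-bytes=1 above returned just 'I'."""
    return "".join(lines)[:n]

log = [
    "I0506 18:28:15.695396 14 GET /api/v1/namespaces/kube-system/pods/r4xt\n",
    "I0506 18:28:15.895308 15 POST /api/v1/namespaces/ns/pods/nzh\n",
]
print(tail(log, 1))         # last line only, like --tail=1
print(limit_bytes(log, 1))  # "I", like --limit-bytes=1
```

`--since` works analogously but keys on each line's timestamp rather than its position, which is why the 1s window above returned only the most recent handful of entries.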
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:22.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  6 18:28:27.918: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:28.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9138" for this suite.

• [SLOW TEST:6.032 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2325,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:28.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  6 18:28:28.642: INFO: Waiting up to 5m0s for pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd" in namespace "emptydir-6679" to be "Succeeded or Failed"
May  6 18:28:28.711: INFO: Pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd": Phase="Pending", Reason="", readiness=false. Elapsed: 69.117278ms
May  6 18:28:30.715: INFO: Pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073101038s
May  6 18:28:32.719: INFO: Pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.077484079s
May  6 18:28:34.723: INFO: Pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081691061s
STEP: Saw pod success
May  6 18:28:34.724: INFO: Pod "pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd" satisfied condition "Succeeded or Failed"
May  6 18:28:34.726: INFO: Trying to get logs from node kali-worker pod pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd container test-container: 
STEP: delete the pod
May  6 18:28:34.788: INFO: Waiting for pod pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd to disappear
May  6 18:28:34.793: INFO: Pod pod-b2f2c07e-2bc1-4ead-82ef-2a18d34650cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:34.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6679" for this suite.

• [SLOW TEST:6.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2335,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
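The emptydir spec mounts a tmpfs-backed volume, writes a test file with `defaultMode: 0666` as a non-root user, and asserts the resulting permission bits from inside the container. Outside a cluster, that permission assertion reduces to a stat check; an ordinary temporary directory stands in for the tmpfs mount here:

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return only the permission bits of `path`, e.g. 0o666."""
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as mount:  # stand-in for the emptyDir tmpfs mount
    path = os.path.join(mount, "test-file")
    with open(path, "w") as f:
        f.write("mount-tmpfs\n")
    os.chmod(path, 0o666)                     # the mode the volume spec requests
    print(oct(file_mode(path)))               # 0o666
```

`chmod` is unaffected by the process umask, so the check is deterministic, which is what lets the e2e test assert an exact mode.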
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:34.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May  6 18:28:34.940: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064543 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:28:34.940: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064544 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:28:34.940: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064545 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May  6 18:28:44.963: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064585 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:28:44.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064586 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:28:44.963: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-608 /api/v1/namespaces/watch-608/configmaps/e2e-watch-test-label-changed 3a114ed6-e6d7-4996-b887-11056dec7e19 2064587 0 2020-05-06 18:28:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-06 18:28:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:44.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-608" for this suite.

• [SLOW TEST:10.170 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":143,"skipped":2452,"failed":0}
SSSSSSSSSSSSS
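The watch spec above relies on a subtlety of label-selector watches: when a watched object's label is changed so it no longer matches the selector, the watcher receives a synthetic DELETED event, and a fresh ADDED event once the label is restored, even though the object itself was only modified. That translation from raw updates to selector-scoped events can be sketched with a hypothetical helper (not client-go):

```python
def selector_events(updates, key, value):
    """Translate raw (event_type, labels) updates into the events seen
    through a watch filtered on the label selector key=value."""
    was_matching = False
    for etype, labels in updates:
        matches = etype != "DELETED" and labels.get(key) == value
        if matches and not was_matching:
            yield "ADDED", labels       # entered the selector's view
        elif matches and was_matching:
            yield "MODIFIED", labels
        elif was_matching and not matches:
            yield "DELETED", labels     # left the view: synthetic delete
        was_matching = matches

label = "watch-this-configmap"
raw = [
    ("ADDED",    {label: "label-changed-and-restored"}),
    ("MODIFIED", {label: "label-changed-and-restored"}),
    ("MODIFIED", {label: "some-other-value"}),            # label changed away
    ("MODIFIED", {label: "label-changed-and-restored"}),  # label restored
    ("MODIFIED", {label: "label-changed-and-restored"}),
    ("DELETED",  {label: "label-changed-and-restored"}),
]
for ev, _ in selector_events(raw, label, "label-changed-and-restored"):
    print(ev)
```

The printed sequence (ADDED, MODIFIED, DELETED, ADDED, MODIFIED, DELETED) matches the two three-event bursts logged by the test above.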
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:44.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-8bd223f0-7e0f-44cc-80bc-149f77056ff4
STEP: Creating a pod to test consume secrets
May  6 18:28:45.075: INFO: Waiting up to 5m0s for pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6" in namespace "secrets-9770" to be "Succeeded or Failed"
May  6 18:28:45.102: INFO: Pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.503188ms
May  6 18:28:47.106: INFO: Pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030924251s
May  6 18:28:49.110: INFO: Pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6": Phase="Running", Reason="", readiness=true. Elapsed: 4.035003525s
May  6 18:28:51.114: INFO: Pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038477705s
STEP: Saw pod success
May  6 18:28:51.114: INFO: Pod "pod-secrets-d9606569-c239-4603-a2e9-3711950324b6" satisfied condition "Succeeded or Failed"
May  6 18:28:51.117: INFO: Trying to get logs from node kali-worker pod pod-secrets-d9606569-c239-4603-a2e9-3711950324b6 container secret-volume-test: 
STEP: delete the pod
May  6 18:28:51.170: INFO: Waiting for pod pod-secrets-d9606569-c239-4603-a2e9-3711950324b6 to disappear
May  6 18:28:51.200: INFO: Pod pod-secrets-d9606569-c239-4603-a2e9-3711950324b6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:28:51.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9770" for this suite.

• [SLOW TEST:6.236 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2465,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:28:51.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2508
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2508
STEP: creating replication controller externalsvc in namespace services-2508
I0506 18:28:51.728912       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2508, replica count: 2
I0506 18:28:54.779351       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:28:57.779614       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:29:00.779880       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
May  6 18:29:00.872: INFO: Creating new exec pod
May  6 18:29:04.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-2508 execpodc88wm -- /bin/sh -x -c nslookup nodeport-service'
May  6 18:29:05.239: INFO: stderr: "I0506 18:29:05.151871    2638 log.go:172] (0xc0009009a0) (0xc0006c83c0) Create stream\nI0506 18:29:05.151929    2638 log.go:172] (0xc0009009a0) (0xc0006c83c0) Stream added, broadcasting: 1\nI0506 18:29:05.154293    2638 log.go:172] (0xc0009009a0) Reply frame received for 1\nI0506 18:29:05.154335    2638 log.go:172] (0xc0009009a0) (0xc00064d360) Create stream\nI0506 18:29:05.154349    2638 log.go:172] (0xc0009009a0) (0xc00064d360) Stream added, broadcasting: 3\nI0506 18:29:05.155255    2638 log.go:172] (0xc0009009a0) Reply frame received for 3\nI0506 18:29:05.155298    2638 log.go:172] (0xc0009009a0) (0xc0004f32c0) Create stream\nI0506 18:29:05.155323    2638 log.go:172] (0xc0009009a0) (0xc0004f32c0) Stream added, broadcasting: 5\nI0506 18:29:05.156150    2638 log.go:172] (0xc0009009a0) Reply frame received for 5\nI0506 18:29:05.222896    2638 log.go:172] (0xc0009009a0) Data frame received for 5\nI0506 18:29:05.222929    2638 log.go:172] (0xc0004f32c0) (5) Data frame handling\nI0506 18:29:05.222944    2638 log.go:172] (0xc0004f32c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0506 18:29:05.229834    2638 log.go:172] (0xc0009009a0) Data frame received for 3\nI0506 18:29:05.229857    2638 log.go:172] (0xc00064d360) (3) Data frame handling\nI0506 18:29:05.229875    2638 log.go:172] (0xc00064d360) (3) Data frame sent\nI0506 18:29:05.230957    2638 log.go:172] (0xc0009009a0) Data frame received for 3\nI0506 18:29:05.230975    2638 log.go:172] (0xc00064d360) (3) Data frame handling\nI0506 18:29:05.230985    2638 log.go:172] (0xc00064d360) (3) Data frame sent\nI0506 18:29:05.231589    2638 log.go:172] (0xc0009009a0) Data frame received for 5\nI0506 18:29:05.231620    2638 log.go:172] (0xc0004f32c0) (5) Data frame handling\nI0506 18:29:05.231819    2638 log.go:172] (0xc0009009a0) Data frame received for 3\nI0506 18:29:05.231852    2638 log.go:172] (0xc00064d360) (3) Data frame handling\nI0506 18:29:05.233537    2638 log.go:172] (0xc0009009a0) Data frame received for 1\nI0506 18:29:05.233555    2638 log.go:172] (0xc0006c83c0) (1) Data frame handling\nI0506 18:29:05.233563    2638 log.go:172] (0xc0006c83c0) (1) Data frame sent\nI0506 18:29:05.233572    2638 log.go:172] (0xc0009009a0) (0xc0006c83c0) Stream removed, broadcasting: 1\nI0506 18:29:05.233762    2638 log.go:172] (0xc0009009a0) Go away received\nI0506 18:29:05.233994    2638 log.go:172] (0xc0009009a0) (0xc0006c83c0) Stream removed, broadcasting: 1\nI0506 18:29:05.234022    2638 log.go:172] (0xc0009009a0) (0xc00064d360) Stream removed, broadcasting: 3\nI0506 18:29:05.234034    2638 log.go:172] (0xc0009009a0) (0xc0004f32c0) Stream removed, broadcasting: 5\n"
May  6 18:29:05.239: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2508.svc.cluster.local\tcanonical name = externalsvc.services-2508.svc.cluster.local.\nName:\texternalsvc.services-2508.svc.cluster.local\nAddress: 10.104.175.30\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2508, will wait for the garbage collector to delete the pods
May  6 18:29:05.298: INFO: Deleting ReplicationController externalsvc took: 5.554378ms
May  6 18:29:05.699: INFO: Terminating ReplicationController externalsvc pods took: 400.253641ms
May  6 18:29:14.583: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:29:15.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2508" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:25.332 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":145,"skipped":2509,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:29:16.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  6 18:29:20.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:20.092: INFO: Number of nodes with available pods: 0
May  6 18:29:20.092: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:21.794: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:22.151: INFO: Number of nodes with available pods: 0
May  6 18:29:22.151: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:23.407: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:23.986: INFO: Number of nodes with available pods: 0
May  6 18:29:23.986: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:24.256: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:24.813: INFO: Number of nodes with available pods: 0
May  6 18:29:24.813: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:25.182: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:25.271: INFO: Number of nodes with available pods: 0
May  6 18:29:25.271: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:26.108: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:26.114: INFO: Number of nodes with available pods: 0
May  6 18:29:26.114: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:27.112: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:27.116: INFO: Number of nodes with available pods: 0
May  6 18:29:27.116: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:28.105: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:28.108: INFO: Number of nodes with available pods: 2
May  6 18:29:28.109: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May  6 18:29:28.513: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:28.518: INFO: Number of nodes with available pods: 1
May  6 18:29:28.518: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:29.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:29.526: INFO: Number of nodes with available pods: 1
May  6 18:29:29.526: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:30.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:30.535: INFO: Number of nodes with available pods: 1
May  6 18:29:30.535: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:31.524: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:31.527: INFO: Number of nodes with available pods: 1
May  6 18:29:31.527: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:32.524: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:32.528: INFO: Number of nodes with available pods: 1
May  6 18:29:32.528: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:33.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:33.527: INFO: Number of nodes with available pods: 1
May  6 18:29:33.527: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:34.524: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:34.528: INFO: Number of nodes with available pods: 1
May  6 18:29:34.528: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:35.669: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:35.672: INFO: Number of nodes with available pods: 1
May  6 18:29:35.672: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:36.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:36.527: INFO: Number of nodes with available pods: 1
May  6 18:29:36.527: INFO: Node kali-worker is running more than one daemon pod
May  6 18:29:37.532: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:29:37.535: INFO: Number of nodes with available pods: 2
May  6 18:29:37.535: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8648, will wait for the garbage collector to delete the pods
May  6 18:29:37.601: INFO: Deleting DaemonSet.extensions daemon-set took: 7.187156ms
May  6 18:29:37.701: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.276974ms
May  6 18:29:43.811: INFO: Number of nodes with available pods: 0
May  6 18:29:43.811: INFO: Number of running nodes: 0, number of available pods: 0
May  6 18:29:43.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8648/daemonsets","resourceVersion":"2064935"},"items":null}

May  6 18:29:43.816: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8648/pods","resourceVersion":"2064935"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:29:43.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8648" for this suite.

• [SLOW TEST:27.293 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":146,"skipped":2517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:29:43.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4767
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4767
I0506 18:29:44.010558       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4767, replica count: 2
I0506 18:29:47.061017       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:29:50.061495       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 18:29:50.061: INFO: Creating new exec pod
May  6 18:29:57.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4767 execpodk24zb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May  6 18:29:57.421: INFO: stderr: "I0506 18:29:57.341517    2660 log.go:172] (0xc000a3c000) (0xc0009f8000) Create stream\nI0506 18:29:57.341575    2660 log.go:172] (0xc000a3c000) (0xc0009f8000) Stream added, broadcasting: 1\nI0506 18:29:57.343719    2660 log.go:172] (0xc000a3c000) Reply frame received for 1\nI0506 18:29:57.343744    2660 log.go:172] (0xc000a3c000) (0xc0009f80a0) Create stream\nI0506 18:29:57.343752    2660 log.go:172] (0xc000a3c000) (0xc0009f80a0) Stream added, broadcasting: 3\nI0506 18:29:57.344420    2660 log.go:172] (0xc000a3c000) Reply frame received for 3\nI0506 18:29:57.344462    2660 log.go:172] (0xc000a3c000) (0xc000448000) Create stream\nI0506 18:29:57.344476    2660 log.go:172] (0xc000a3c000) (0xc000448000) Stream added, broadcasting: 5\nI0506 18:29:57.345307    2660 log.go:172] (0xc000a3c000) Reply frame received for 5\nI0506 18:29:57.415681    2660 log.go:172] (0xc000a3c000) Data frame received for 5\nI0506 18:29:57.415705    2660 log.go:172] (0xc000448000) (5) Data frame handling\nI0506 18:29:57.415720    2660 log.go:172] (0xc000448000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0506 18:29:57.415995    2660 log.go:172] (0xc000a3c000) Data frame received for 5\nI0506 18:29:57.416067    2660 log.go:172] (0xc000448000) (5) Data frame handling\nI0506 18:29:57.416112    2660 log.go:172] (0xc000448000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 18:29:57.416182    2660 log.go:172] (0xc000a3c000) Data frame received for 5\nI0506 18:29:57.416199    2660 log.go:172] (0xc000448000) (5) Data frame handling\nI0506 18:29:57.416339    2660 log.go:172] (0xc000a3c000) Data frame received for 3\nI0506 18:29:57.416351    2660 log.go:172] (0xc0009f80a0) (3) Data frame handling\nI0506 18:29:57.417804    2660 log.go:172] (0xc000a3c000) Data frame received for 1\nI0506 18:29:57.417821    2660 log.go:172] (0xc0009f8000) (1) Data frame handling\nI0506 18:29:57.417832    2660 log.go:172] (0xc0009f8000) (1) Data frame sent\nI0506 18:29:57.417845    2660 log.go:172] (0xc000a3c000) (0xc0009f8000) Stream removed, broadcasting: 1\nI0506 18:29:57.417947    2660 log.go:172] (0xc000a3c000) Go away received\nI0506 18:29:57.418081    2660 log.go:172] (0xc000a3c000) (0xc0009f8000) Stream removed, broadcasting: 1\nI0506 18:29:57.418092    2660 log.go:172] (0xc000a3c000) (0xc0009f80a0) Stream removed, broadcasting: 3\nI0506 18:29:57.418097    2660 log.go:172] (0xc000a3c000) (0xc000448000) Stream removed, broadcasting: 5\n"
May  6 18:29:57.421: INFO: stdout: ""
May  6 18:29:57.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4767 execpodk24zb -- /bin/sh -x -c nc -zv -t -w 2 10.96.74.173 80'
May  6 18:29:57.849: INFO: stderr: "I0506 18:29:57.775223    2680 log.go:172] (0xc000b51600) (0xc000c20780) Create stream\nI0506 18:29:57.775282    2680 log.go:172] (0xc000b51600) (0xc000c20780) Stream added, broadcasting: 1\nI0506 18:29:57.779413    2680 log.go:172] (0xc000b51600) Reply frame received for 1\nI0506 18:29:57.779471    2680 log.go:172] (0xc000b51600) (0xc0005eb5e0) Create stream\nI0506 18:29:57.779492    2680 log.go:172] (0xc000b51600) (0xc0005eb5e0) Stream added, broadcasting: 3\nI0506 18:29:57.780559    2680 log.go:172] (0xc000b51600) Reply frame received for 3\nI0506 18:29:57.780603    2680 log.go:172] (0xc000b51600) (0xc000518a00) Create stream\nI0506 18:29:57.780615    2680 log.go:172] (0xc000b51600) (0xc000518a00) Stream added, broadcasting: 5\nI0506 18:29:57.781953    2680 log.go:172] (0xc000b51600) Reply frame received for 5\nI0506 18:29:57.840496    2680 log.go:172] (0xc000b51600) Data frame received for 3\nI0506 18:29:57.840528    2680 log.go:172] (0xc0005eb5e0) (3) Data frame handling\nI0506 18:29:57.840572    2680 log.go:172] (0xc000b51600) Data frame received for 5\nI0506 18:29:57.840613    2680 log.go:172] (0xc000518a00) (5) Data frame handling\nI0506 18:29:57.840637    2680 log.go:172] (0xc000518a00) (5) Data frame sent\nI0506 18:29:57.840652    2680 log.go:172] (0xc000b51600) Data frame received for 5\nI0506 18:29:57.840667    2680 log.go:172] (0xc000518a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.74.173 80\nConnection to 10.96.74.173 80 port [tcp/http] succeeded!\nI0506 18:29:57.843029    2680 log.go:172] (0xc000b51600) Data frame received for 1\nI0506 18:29:57.843062    2680 log.go:172] (0xc000c20780) (1) Data frame handling\nI0506 18:29:57.843095    2680 log.go:172] (0xc000c20780) (1) Data frame sent\nI0506 18:29:57.843123    2680 log.go:172] (0xc000b51600) (0xc000c20780) Stream removed, broadcasting: 1\nI0506 18:29:57.843240    2680 log.go:172] (0xc000b51600) Go away received\nI0506 18:29:57.843606    2680 log.go:172] (0xc000b51600) (0xc000c20780) Stream removed, broadcasting: 1\nI0506 18:29:57.843619    2680 log.go:172] (0xc000b51600) (0xc0005eb5e0) Stream removed, broadcasting: 3\nI0506 18:29:57.843625    2680 log.go:172] (0xc000b51600) (0xc000518a00) Stream removed, broadcasting: 5\n"
May  6 18:29:57.849: INFO: stdout: ""
May  6 18:29:57.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4767 execpodk24zb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30296'
May  6 18:29:58.143: INFO: stderr: "I0506 18:29:58.071884    2702 log.go:172] (0xc00001bc30) (0xc0006774a0) Create stream\nI0506 18:29:58.071943    2702 log.go:172] (0xc00001bc30) (0xc0006774a0) Stream added, broadcasting: 1\nI0506 18:29:58.074652    2702 log.go:172] (0xc00001bc30) Reply frame received for 1\nI0506 18:29:58.074684    2702 log.go:172] (0xc00001bc30) (0xc000a0c000) Create stream\nI0506 18:29:58.074693    2702 log.go:172] (0xc00001bc30) (0xc000a0c000) Stream added, broadcasting: 3\nI0506 18:29:58.075530    2702 log.go:172] (0xc00001bc30) Reply frame received for 3\nI0506 18:29:58.075578    2702 log.go:172] (0xc00001bc30) (0xc000026000) Create stream\nI0506 18:29:58.075593    2702 log.go:172] (0xc00001bc30) (0xc000026000) Stream added, broadcasting: 5\nI0506 18:29:58.076518    2702 log.go:172] (0xc00001bc30) Reply frame received for 5\nI0506 18:29:58.137929    2702 log.go:172] (0xc00001bc30) Data frame received for 3\nI0506 18:29:58.137950    2702 log.go:172] (0xc000a0c000) (3) Data frame handling\nI0506 18:29:58.137991    2702 log.go:172] (0xc00001bc30) Data frame received for 5\nI0506 18:29:58.138020    2702 log.go:172] (0xc000026000) (5) Data frame handling\nI0506 18:29:58.138037    2702 log.go:172] (0xc000026000) (5) Data frame sent\nI0506 18:29:58.138048    2702 log.go:172] (0xc00001bc30) Data frame received for 5\nI0506 18:29:58.138053    2702 log.go:172] (0xc000026000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30296\nConnection to 172.17.0.15 30296 port [tcp/30296] succeeded!\nI0506 18:29:58.139678    2702 log.go:172] (0xc00001bc30) Data frame received for 1\nI0506 18:29:58.139690    2702 log.go:172] (0xc0006774a0) (1) Data frame handling\nI0506 18:29:58.139720    2702 log.go:172] (0xc0006774a0) (1) Data frame sent\nI0506 18:29:58.139735    2702 log.go:172] (0xc00001bc30) (0xc0006774a0) Stream removed, broadcasting: 1\nI0506 18:29:58.140088    2702 log.go:172] (0xc00001bc30) (0xc0006774a0) Stream removed, broadcasting: 1\nI0506 18:29:58.140108    2702 log.go:172] (0xc00001bc30) (0xc000a0c000) Stream removed, broadcasting: 3\nI0506 18:29:58.140115    2702 log.go:172] (0xc00001bc30) (0xc000026000) Stream removed, broadcasting: 5\nI0506 18:29:58.140131    2702 log.go:172] (0xc00001bc30) Go away received\n"
May  6 18:29:58.143: INFO: stdout: ""
May  6 18:29:58.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-4767 execpodk24zb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30296'
May  6 18:29:58.354: INFO: stderr: "I0506 18:29:58.267167    2722 log.go:172] (0xc0003e1e40) (0xc0006ed4a0) Create stream\nI0506 18:29:58.267238    2722 log.go:172] (0xc0003e1e40) (0xc0006ed4a0) Stream added, broadcasting: 1\nI0506 18:29:58.270619    2722 log.go:172] (0xc0003e1e40) Reply frame received for 1\nI0506 18:29:58.270681    2722 log.go:172] (0xc0003e1e40) (0xc0008e6000) Create stream\nI0506 18:29:58.270704    2722 log.go:172] (0xc0003e1e40) (0xc0008e6000) Stream added, broadcasting: 3\nI0506 18:29:58.271673    2722 log.go:172] (0xc0003e1e40) Reply frame received for 3\nI0506 18:29:58.271716    2722 log.go:172] (0xc0003e1e40) (0xc0006ed540) Create stream\nI0506 18:29:58.271723    2722 log.go:172] (0xc0003e1e40) (0xc0006ed540) Stream added, broadcasting: 5\nI0506 18:29:58.272554    2722 log.go:172] (0xc0003e1e40) Reply frame received for 5\nI0506 18:29:58.345455    2722 log.go:172] (0xc0003e1e40) Data frame received for 5\nI0506 18:29:58.345584    2722 log.go:172] (0xc0006ed540) (5) Data frame handling\nI0506 18:29:58.345655    2722 log.go:172] (0xc0006ed540) (5) Data frame sent\nI0506 18:29:58.345751    2722 log.go:172] (0xc0003e1e40) Data frame received for 5\nI0506 18:29:58.345846    2722 log.go:172] (0xc0006ed540) (5) Data frame handling\nI0506 18:29:58.345926    2722 log.go:172] (0xc0003e1e40) Data frame received for 3\nI0506 18:29:58.345955    2722 log.go:172] (0xc0008e6000) (3) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30296\nConnection to 172.17.0.18 30296 port [tcp/30296] succeeded!\nI0506 18:29:58.345998    2722 log.go:172] (0xc0006ed540) (5) Data frame sent\nI0506 18:29:58.346674    2722 log.go:172] (0xc0003e1e40) Data frame received for 5\nI0506 18:29:58.346697    2722 log.go:172] (0xc0006ed540) (5) Data frame handling\nI0506 18:29:58.348613    2722 log.go:172] (0xc0003e1e40) Data frame received for 1\nI0506 18:29:58.348641    2722 log.go:172] (0xc0006ed4a0) (1) Data frame handling\nI0506 18:29:58.348654    2722 log.go:172] (0xc0006ed4a0) (1) Data frame sent\nI0506 18:29:58.348666    2722 log.go:172] (0xc0003e1e40) (0xc0006ed4a0) Stream removed, broadcasting: 1\nI0506 18:29:58.348764    2722 log.go:172] (0xc0003e1e40) Go away received\nI0506 18:29:58.349033    2722 log.go:172] (0xc0003e1e40) (0xc0006ed4a0) Stream removed, broadcasting: 1\nI0506 18:29:58.349063    2722 log.go:172] (0xc0003e1e40) (0xc0008e6000) Stream removed, broadcasting: 3\nI0506 18:29:58.349301    2722 log.go:172] (0xc0003e1e40) (0xc0006ed540) Stream removed, broadcasting: 5\n"
May  6 18:29:58.354: INFO: stdout: ""
May  6 18:29:58.354: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:29:58.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4767" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:14.794 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":147,"skipped":2541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:29:58.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0506 18:30:00.868603       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  6 18:30:00.868: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:30:00.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6187" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":148,"skipped":2570,"failed":0}
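The garbage-collector test above deletes the Deployment without orphaning and then polls until the owned ReplicaSet and Pods disappear — hence the transient "expected 0 pods, got 2 pods" / "expected 0 rs, got 1 rs" steps before the cascade finishes. A toy model of that ownerReference cascade (plain Go, not the real controller; names are illustrative):

```go
package main

import "fmt"

// obj is a toy API object: a name plus the name of its owner, if any.
type obj struct {
	name  string
	owner string // "" means no owner reference
}

// cascadeDelete removes root and, transitively, every object whose owner
// chain leads back to it - the effect of a non-orphaning delete, where the
// garbage collector follows ownerReferences downward.
func cascadeDelete(objs []obj, root string) []obj {
	doomed := map[string]bool{root: true}
	// Sweep until no new dependents are found, so chains like
	// Deployment -> ReplicaSet -> Pod resolve regardless of slice order.
	for changed := true; changed; {
		changed = false
		for _, o := range objs {
			if !doomed[o.name] && doomed[o.owner] {
				doomed[o.name] = true
				changed = true
			}
		}
	}
	var kept []obj
	for _, o := range objs {
		if !doomed[o.name] {
			kept = append(kept, o)
		}
	}
	return kept
}

func main() {
	cluster := []obj{
		{"deploy/web", ""},
		{"rs/web-1", "deploy/web"},
		{"pod/web-1-a", "rs/web-1"},
		{"pod/web-1-b", "rs/web-1"},
		{"pod/standalone", ""},
	}
	// Only the unowned pod survives deleting the deployment.
	fmt.Println(len(cascadeDelete(cluster, "deploy/web"))) // 1
}
```

The real GC is asynchronous (background propagation by default), which is why the test observes leftovers for a moment before the counts reach zero.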
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:30:00.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:30:01.292: INFO: Creating deployment "webserver-deployment"
May  6 18:30:01.518: INFO: Waiting for observed generation 1
May  6 18:30:03.738: INFO: Waiting for all required pods to come up
May  6 18:30:04.472: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May  6 18:30:19.013: INFO: Waiting for deployment "webserver-deployment" to complete
May  6 18:30:19.019: INFO: Updating deployment "webserver-deployment" with a non-existent image
May  6 18:30:19.027: INFO: Updating deployment webserver-deployment
May  6 18:30:19.027: INFO: Waiting for observed generation 2
May  6 18:30:21.350: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May  6 18:30:21.352: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May  6 18:30:21.683: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  6 18:30:21.690: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May  6 18:30:21.690: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May  6 18:30:21.692: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  6 18:30:21.696: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May  6 18:30:21.696: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May  6 18:30:21.704: INFO: Updating deployment webserver-deployment
May  6 18:30:21.704: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May  6 18:30:22.034: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May  6 18:30:22.333: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
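The numbers being verified here fall out of proportional scaling: mid-rollout the two ReplicaSets sit at 8 and 5 replicas (total 13 = 10 replicas + maxSurge 3, per the `MaxSurge:3,MaxUnavailable:2` in the Deployment dump below), and scaling the Deployment 10 → 30 raises the cap to 33, which is split in the existing 8:5 ratio to give 20 and 13. A simplified sketch of that arithmetic (the real controller, `deploymentutil.GetProportion`, also reads the max-replicas annotation, bounds each step by the surge budget, and distributes any rounding leftover; none of that matters for this example because the rounded sizes already sum to 33):

```go
package main

import (
	"fmt"
	"math"
)

// proportional rescales each ReplicaSet so the existing mix is preserved:
// newSize_i ~= oldSize_i * newTotal / oldTotal, rounded to nearest. This is
// a simplified sketch of the deployment controller's proportional scaling,
// omitting the surge-budget caps and leftover distribution it also applies.
func proportional(oldSizes []int32, oldTotal, newTotal int32) []int32 {
	out := make([]int32, len(oldSizes))
	for i, n := range oldSizes {
		out[i] = int32(math.Round(float64(n) * float64(newTotal) / float64(oldTotal)))
	}
	return out
}

func main() {
	// From the log: RSes at 8 and 5 (total 13), deployment scaled so the
	// allowed total becomes 30 + maxSurge(3) = 33.
	fmt.Println(proportional([]int32{8, 5}, 13, 33)) // [20 13]
}
```

Those two values are exactly the `.spec.replicas = 20` and `.spec.replicas = 13` the test asserts against the first and second rollout's ReplicaSets.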
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  6 18:30:26.782: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-7418 /apis/apps/v1/namespaces/deployment-7418/deployments/webserver-deployment 98320657-60f7-4303-b6b6-f191347fea22 2065413 3 2020-05-06 18:30:01 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-06 18:30:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 
125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 
123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0064460c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 18:30:22 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-06 18:30:23 +0000 UTC,LastTransitionTime:2020-05-06 18:30:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

May  6 18:30:27.688: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-7418 /apis/apps/v1/namespaces/deployment-7418/replicasets/webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 2065397 3 2020-05-06 18:30:19 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 98320657-60f7-4303-b6b6-f191347fea22 0xc004168bf7 0xc004168bf8}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 56 51 50 48 54 53 55 45 54 48 102 55 45 52 51 48 51 45 98 54 98 54 45 102 49 57 49 51 52 55 102 101 97 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 
102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004168c78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:30:27.689: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May  6 18:30:27.689: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-7418 /apis/apps/v1/namespaces/deployment-7418/replicasets/webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 2065411 3 2020-05-06 18:30:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 98320657-60f7-4303-b6b6-f191347fea22 0xc004168cd7 0xc004168cd8}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 57 56 51 50 48 54 53 55 45 54 48 102 55 45 52 51 48 51 45 98 54 98 54 45 102 49 57 49 51 52 55 102 101 97 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 
100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004168d48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May  6 18:30:27.948: INFO: Pod "webserver-deployment-6676bcd6d4-5ncr9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5ncr9 webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-5ncr9 48e4d74c-1a40-469e-871a-301ab8013480 2065422 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc0041692b7 0xc0041692b8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 51 102 100 100 55 48 48 45 102 56 101 51 45 52 102 49 53 45 97 102 97 99 45 102 56 100 102 52 51 101 98 54 100 48 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 
116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:30:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 
92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.948: INFO: Pod "webserver-deployment-6676bcd6d4-5r7gl" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5r7gl webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-5r7gl b3e9e795-b18b-4a8d-928a-da1cf042ecc6 2065335 0 2020-05-06 18:30:20 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169467 0xc004169468}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:20 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:21 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
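The framework marks each of the pods above "not available" because availability hinges on the Pod's phase and its Ready condition. A minimal sketch of that rule in Python (the helper name and the simplified check are assumptions for illustration, not the e2e framework's actual code, which also honors the deployment's minReadySeconds):

```python
# Sketch of the availability rule suggested by the log above: a pod counts
# as available only when it is Running and its Ready condition is True.
# (Simplified: the real deployment utilities also honor minReadySeconds.)

def is_pod_available(phase: str, conditions: list[dict]) -> bool:
    """Return True if the pod is Running with Ready=True."""
    if phase != "Running":
        return False
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in conditions
    )

# Conditions as reported for webserver-deployment-6676bcd6d4-5r7gl above:
pod_phase = "Pending"
pod_conditions = [
    {"type": "Initialized", "status": "True"},
    {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
    {"type": "ContainersReady", "status": "False", "reason": "ContainersNotReady"},
    {"type": "PodScheduled", "status": "True"},
]
print(is_pod_available(pod_phase, pod_conditions))  # False: matches "is not available"
```

With the pod still Pending (its container stuck in ContainerCreating pulling the intentionally invalid image `webserver:404`), the check is False, which is exactly why the log reports it as not available.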
May  6 18:30:27.948: INFO: Pod "webserver-deployment-6676bcd6d4-7kh65" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-7kh65 webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-7kh65 6aa2a44e-0f9f-4d92-a174-eea456db7183 2065308 0 2020-05-06 18:30:19 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169627 0xc004169628}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.949: INFO: Pod "webserver-deployment-6676bcd6d4-9w49b" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9w49b webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-9w49b 8718453b-3449-43f7-a00b-b4e86c2aa41c 2065457 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc0041697d7 0xc0041697d8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:26 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.949: INFO: Pod "webserver-deployment-6676bcd6d4-cqhzg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cqhzg webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-cqhzg 0b1411fd-fdc9-4dba-91e6-3af57a0b714a 2065390 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169987 0xc004169988}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.949: INFO: Pod "webserver-deployment-6676bcd6d4-cqk7t" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cqk7t webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-cqk7t 947b4ecb-79d7-4643-831c-64653fb9cf09 2065420 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169ac7 0xc004169ac8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.950: INFO: Pod "webserver-deployment-6676bcd6d4-fzh5s" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fzh5s webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-fzh5s e3ed0775-c459-478d-9d5e-a930d652e1f3 2065453 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169c77 0xc004169c78}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:26 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.950: INFO: Pod "webserver-deployment-6676bcd6d4-gghh4" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gghh4 webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-gghh4 0125ee0e-7d0b-45d0-a291-f724453f539b 2065333 0 2020-05-06 18:30:20 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169e27 0xc004169e28}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:20 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:20 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.950: INFO: Pod "webserver-deployment-6676bcd6d4-kqrsp" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kqrsp webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-kqrsp aada8170-b479-4339-b515-4d340c5099b7 2065398 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc004169fd7 0xc004169fd8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.951: INFO: Pod "webserver-deployment-6676bcd6d4-rsmcn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rsmcn webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-rsmcn 7ff3eda6-3108-4d5d-8fe2-46274e70c94a 2065316 0 2020-05-06 18:30:19 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc005d92187 0xc005d92188}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.951: INFO: Pod "webserver-deployment-6676bcd6d4-vsz4k" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vsz4k webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-vsz4k 9b09b236-c8b1-43e4-8c83-ee2e9d324c06 2065381 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc005d92347 0xc005d92348}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.951: INFO: Pod "webserver-deployment-6676bcd6d4-wp2th" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wp2th webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-wp2th 0a16986c-bc25-4aca-8800-124347c80e42 2065393 0 2020-05-06 18:30:23 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc005d92497 0xc005d92498}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.951: INFO: Pod "webserver-deployment-6676bcd6d4-zzrkm" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zzrkm webserver-deployment-6676bcd6d4- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-6676bcd6d4-zzrkm d4eb207d-854e-4cbc-a686-0462ba059ab8 2065309 0 2020-05-06 18:30:19 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 13fdd700-f8e3-4f15-afac-f8df43eb6d0e 0xc005d925d7 0xc005d925d8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13fdd700-f8e3-4f15-afac-f8df43eb6d0e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:19 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.952: INFO: Pod "webserver-deployment-84855cf797-75r54" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-75r54 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-75r54 e20ee8c4-8386-4bb6-8713-698db0f583fa 2065208 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92787 0xc005d92788}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-05-06 18:30:12 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:12 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.78,StartTime:2020-05-06 18:30:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4fd00aa1e0ecd90482e87c830ef6a227ce8deaeebd4e48d466667fa223046d57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
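The `FieldsV1{Raw:*[123 34 102 58 …]}` runs in the dumps above are the pods' managed-fields JSON rendered by Go's default formatter as a decimal byte slice, so they can be decoded back into readable JSON. A minimal sketch of that decoding (the sample bytes are a short illustrative prefix, not one of the full arrays above):

```python
import json

def decode_fieldsv1(raw_bytes):
    """Convert a decimal byte list (as Go's %v prints a []byte)
    back into the managed-fields JSON object it encodes."""
    return json.loads(bytes(raw_bytes).decode("utf-8"))

# 123='{', 34='"', 102='f', 58=':', ... - decodes to {"f:metadata":{}}
sample = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97,
          116, 97, 34, 58, 123, 125, 125]
print(decode_fieldsv1(sample))
```

Applying the same function to a full `Raw` array from the log yields the server-side-apply field ownership map (`f:metadata`, `f:spec`, `f:status`, …) recorded for `kube-controller-manager` and `kubelet`.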
May  6 18:30:27.952: INFO: Pod "webserver-deployment-84855cf797-7dfrc" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-7dfrc webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-7dfrc 069101fb-6356-497c-8410-09ec0cdf2a6a 2065386 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92937 0xc005d92938}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.952: INFO: Pod "webserver-deployment-84855cf797-9xr5h" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-9xr5h webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-9xr5h 15deded3-ff63-4c24-9d7f-fbc186915fe5 2065389 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92a67 0xc005d92a68}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.952: INFO: Pod "webserver-deployment-84855cf797-c5bw9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c5bw9 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-c5bw9 24d165fc-a644-4ea4-a507-88cf937f325f 2065187 0 2020-05-06 18:30:01 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92b97 0xc005d92b98}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:30:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 57 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:09 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.92,StartTime:2020-05-06 18:30:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1be15526c3f0ac62e058246912b71bd910a7be2bca1257ea8f21bc838bf54a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.953: INFO: Pod "webserver-deployment-84855cf797-c5dgx" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-c5dgx webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-c5dgx d9a4196f-f088-40f7-b757-4aa408febd2b 2065388 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92d47 0xc005d92d48}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.953: INFO: Pod "webserver-deployment-84855cf797-dwt9s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dwt9s webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-dwt9s 068ef4e7-f8c6-4e9d-96e4-e53c0d0d9b08 2065434 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d92e77 0xc005d92e78}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
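(Editor's note: the `FieldsV1{Raw:*[...]}` payloads in these Pod dumps are a byte slice printed as space-separated decimal ASCII values. A minimal sketch, assuming Python is available, of how such a run decodes back to its JSON text; the `raw` sample is the first few bytes of the kube-controller-manager managedFields entry above.)

```python
# Decode a FieldsV1 Raw dump (decimal ASCII byte values, as printed in the
# e2e log) back into its JSON text.
raw = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"

def decode_fieldsv1(byte_text: str) -> str:
    """Convert space-separated decimal byte values to a UTF-8 string."""
    return bytes(int(b) for b in byte_text.split()).decode("utf-8")

print(decode_fieldsv1(raw))  # → {"f:metadata":{
```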
May  6 18:30:27.953: INFO: Pod "webserver-deployment-84855cf797-ffgtg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ffgtg webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-ffgtg 87ea74a9-a16c-4056-a0d0-f587a803e1dd 2065378 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93007 0xc005d93008}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.954: INFO: Pod "webserver-deployment-84855cf797-kp227" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kp227 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-kp227 36db002c-0f5c-446e-95e2-8d76e71e977b 2065410 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93137 0xc005d93138}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.954: INFO: Pod "webserver-deployment-84855cf797-kz78s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kz78s webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-kz78s b9dda569-77ac-4a69-aa52-6a432a663204 2065408 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d932c7 0xc005d932c8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.954: INFO: Pod "webserver-deployment-84855cf797-lh4hk" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lh4hk webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-lh4hk 4be40639-9b71-4eed-a957-a7973053d4b0 2065394 0 2020-05-06 18:30:21 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93457 0xc005d93458}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:21 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:23 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.955: INFO: Pod "webserver-deployment-84855cf797-mgxct" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mgxct webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-mgxct c90ede5c-f353-4a41-aa26-d5baa0557224 2065218 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d935e7 0xc005d935e8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.93,StartTime:2020-05-06 18:30:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c40c1c7052bbd0a996696e7782bf29714c34087810d7f7d23bc5deab55511353,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.955: INFO: Pod "webserver-deployment-84855cf797-ndncg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ndncg webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-ndncg 1dc27024-8b13-4c17-8514-521711e4261a 2065430 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93797 0xc005d93798}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.955: INFO: Pod "webserver-deployment-84855cf797-pmgt5" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-pmgt5 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-pmgt5 025da7bf-192d-45e1-8836-f114799a0dc6 2065439 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93927 0xc005d93928}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:24 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},
Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.955: INFO: Pod "webserver-deployment-84855cf797-q5pkh" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-q5pkh webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-q5pkh 2e9d7981-2fca-4c24-8860-60f7d42e418a 2065265 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93ab7 0xc005d93ab8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:17 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.81,StartTime:2020-05-06 18:30:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c2c6a39213de0a2434806d2f31a07741c51b3db0f7df24be6c4c37767cf5b29f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.956: INFO: Pod "webserver-deployment-84855cf797-qkd9p" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qkd9p webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-qkd9p cd64d754-827a-48e7-a818-0fdeac55da82 2065237 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93c67 0xc005d93c68}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.94,StartTime:2020-05-06 18:30:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d3dc2b2c8a9bcf4aa052e463ab3db531996090400feda001aee4fbdca10318ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.956: INFO: Pod "webserver-deployment-84855cf797-rhc7g" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rhc7g webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-rhc7g ff07af39-73d2-4fb0-8041-bf45e8cdc4d5 2065443 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93e17 0xc005d93e18}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:25 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-06 18:30:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.956: INFO: Pod "webserver-deployment-84855cf797-sd8s8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-sd8s8 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-sd8s8 4a17905a-85a0-4280-96d0-94c676e50b21 2065224 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d93fa7 0xc005d93fa8}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2905d59-2412-4593-bd1f-b489f3d79447\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-05-06 18:30:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.79,StartTime:2020-05-06 18:30:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://980cbef253774f465893f707ac80bed2d60ac661a787087e7a5e9969458ec583,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.957: INFO: Pod "webserver-deployment-84855cf797-tb7sb" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tb7sb webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-tb7sb 70707705-e60f-4d75-bd48-f1d8942d5bc1 2065219 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d1a157 0xc005d1a158}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:30:14 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 56 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:13 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.80,StartTime:2020-05-06 18:30:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f17d9d4f50ec271ea6666559e1ad2a533ed89d823ac7f3edfeb63634233ce70d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.957: INFO: Pod "webserver-deployment-84855cf797-tnb5t" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tnb5t webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-tnb5t 222dd492-86c2-4762-9410-abfc7399fed8 2065387 0 2020-05-06 18:30:22 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d1a317 0xc005d1a318}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:22 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,P
rivileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:30:27.957: INFO: Pod "webserver-deployment-84855cf797-vrhm4" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vrhm4 webserver-deployment-84855cf797- deployment-7418 /api/v1/namespaces/deployment-7418/pods/webserver-deployment-84855cf797-vrhm4 a05583ce-3ea8-4ca8-8007-5b6433e4868b 2065261 0 2020-05-06 18:30:02 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c2905d59-2412-4593-bd1f-b489f3d79447 0xc005d1a457 0xc005d1a458}] []  [{kube-controller-manager Update v1 2020-05-06 18:30:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 57 48 53 100 53 57 45 50 52 49 50 45 52 53 57 51 45 98 100 49 102 45 98 52 56 57 102 51 100 55 57 52 52 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 
111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:30:16 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 
44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 56 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5rtbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5rtbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5rtbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:30:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.82,StartTime:2020-05-06 18:30:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:30:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2ba3dd0de586d35e9453a6ab7d8de834a9010391a2de852fcc89f2d143d7334c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
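A note on the pod dumps above: the `FieldsV1{Raw:*[123 34 102 ...]}` blocks are not corruption. Go's `%v` formatter prints the `managedFields` JSON as a slice of decimal byte values (123 is `{`, 34 is `"`, 102 is `f`, and so on). A minimal sketch of recovering the JSON, assuming you paste the decimal values from one of the `Raw:*[...]` blocks into a list:

```python
# Decimal byte values copied from the start of a FieldsV1{Raw:*[...]}
# block in the log above. This short prefix decodes to the beginning of
# the managedFields JSON; pasting a full block decodes the whole object.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]

text = bytes(raw).decode("utf-8")
print(text)  # {"f:metadata":{
```

The same trick applies to every `Raw:*[...]` dump in this suite's output, since they are all UTF-8 JSON rendered byte-by-byte.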
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:30:27.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7418" for this suite.

• [SLOW TEST:28.004 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":149,"skipped":2579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:30:28.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
May  6 18:30:30.302: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:30:30.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5338" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":150,"skipped":2655,"failed":0}
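The proxy test above runs `kubectl proxy -p 0`, which asks the operating system to pick a free ephemeral port rather than binding a fixed one; kubectl then reports the port it actually received, and the test curls `/api/` through it. The underlying mechanism is ordinary port-0 binding, sketched here with a plain socket (no cluster required):

```python
import socket

# Binding to port 0 delegates port selection to the kernel, which is
# the same mechanism `kubectl proxy -p 0` relies on to avoid collisions.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
print(f"kernel chose port {port}")
sock.close()
```

Because the kernel guarantees the chosen port was free at bind time, tests using port 0 can run in parallel without coordinating port assignments.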
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:30:30.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-l488
STEP: Creating a pod to test atomic-volume-subpath
May  6 18:30:31.434: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l488" in namespace "subpath-7079" to be "Succeeded or Failed"
May  6 18:30:31.490: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 55.626802ms
May  6 18:30:33.801: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366929137s
May  6 18:30:36.190: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75544101s
May  6 18:30:38.279: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84460196s
May  6 18:30:40.460: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 9.025521249s
May  6 18:30:43.018: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 11.58378179s
May  6 18:30:45.057: INFO: Pod "pod-subpath-test-secret-l488": Phase="Pending", Reason="", readiness=false. Elapsed: 13.622944313s
May  6 18:30:47.812: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 16.37739099s
May  6 18:30:49.988: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 18.553774007s
May  6 18:30:52.100: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 20.665779394s
May  6 18:30:54.569: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 23.134834971s
May  6 18:30:57.441: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 26.00640993s
May  6 18:30:59.870: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 28.436012034s
May  6 18:31:01.914: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 30.479459287s
May  6 18:31:04.429: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 32.994671474s
May  6 18:31:06.461: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 35.026104361s
May  6 18:31:08.639: INFO: Pod "pod-subpath-test-secret-l488": Phase="Running", Reason="", readiness=true. Elapsed: 37.204394888s
May  6 18:31:10.650: INFO: Pod "pod-subpath-test-secret-l488": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.215835024s
STEP: Saw pod success
May  6 18:31:10.650: INFO: Pod "pod-subpath-test-secret-l488" satisfied condition "Succeeded or Failed"
May  6 18:31:10.664: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-l488 container test-container-subpath-secret-l488: 
STEP: delete the pod
May  6 18:31:10.868: INFO: Waiting for pod pod-subpath-test-secret-l488 to disappear
May  6 18:31:10.874: INFO: Pod pod-subpath-test-secret-l488 no longer exists
STEP: Deleting pod pod-subpath-test-secret-l488
May  6 18:31:10.874: INFO: Deleting pod "pod-subpath-test-secret-l488" in namespace "subpath-7079"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:31:10.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7079" for this suite.

• [SLOW TEST:40.452 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":151,"skipped":2667,"failed":0}
SSSSSSSSSSSSSS
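The repeated `Phase="Pending" ... Elapsed: ...` lines above are the framework polling the pod object until it reaches the `"Succeeded or Failed"` condition, logging the phase and elapsed time on each poll. A minimal sketch of that poll loop (hypothetical names, not the e2e framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, terminal=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       sleep=time.sleep, log=print):
    """Poll get_phase() until a terminal phase or timeout.

    get_phase stands in for a GET on the pod object; each poll emits one
    line with the phase and elapsed time, like the log above.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        log(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in terminal:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

The default 5m0s timeout and ~2s interval match the cadence visible in the log.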
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:31:10.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-5c150e7d-e09b-415c-ae41-f05f8be5e169
STEP: Creating a pod to test consume configMaps
May  6 18:31:11.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd" in namespace "configmap-9703" to be "Succeeded or Failed"
May  6 18:31:11.287: INFO: Pod "pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.915172ms
May  6 18:31:13.638: INFO: Pod "pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391956683s
May  6 18:31:16.609: INFO: Pod "pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.362879607s
STEP: Saw pod success
May  6 18:31:16.609: INFO: Pod "pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd" satisfied condition "Succeeded or Failed"
May  6 18:31:16.612: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd container configmap-volume-test: 
STEP: delete the pod
May  6 18:31:17.119: INFO: Waiting for pod pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd to disappear
May  6 18:31:17.149: INFO: Pod pod-configmaps-a12e1db5-7c7a-4abb-8cdf-25b76000aafd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:31:17.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9703" for this suite.

• [SLOW TEST:6.347 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2681,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:31:17.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  6 18:31:21.778: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:31:21.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4480" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2712,"failed":0}
SSSSSS
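The termination-message test above verifies that with `TerminationMessagePolicy: FallbackToLogsOnError`, a pod that *succeeds* reports an empty message: logs are only consulted when the message file is empty and the container failed. A sketch of that decision, under assumed simplified inputs (not the kubelet's actual code):

```python
def termination_message(policy, exit_code, message_file_contents, logs,
                        tail_bytes=4096):
    """Derive a container's termination message.

    Priority: the termination message file wins if non-empty; with
    FallbackToLogsOnError, a tail of the logs is used only when the
    container failed. A succeeding container with an empty file (the
    case the test exercises) yields an empty message.
    """
    if message_file_contents:
        return message_file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-tail_bytes:]
    return ""
```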
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:31:21.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:31:22.056: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2" in namespace "downward-api-7722" to be "Succeeded or Failed"
May  6 18:31:22.069: INFO: Pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.395581ms
May  6 18:31:24.074: INFO: Pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017931893s
May  6 18:31:26.097: INFO: Pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04122058s
May  6 18:31:28.127: INFO: Pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070435693s
STEP: Saw pod success
May  6 18:31:28.127: INFO: Pod "downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2" satisfied condition "Succeeded or Failed"
May  6 18:31:28.129: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2 container client-container: 
STEP: delete the pod
May  6 18:31:28.388: INFO: Waiting for pod downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2 to disappear
May  6 18:31:28.507: INFO: Pod downwardapi-volume-d0ef91e5-28c9-487c-a077-9ce937954ed2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:31:28.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7722" for this suite.

• [SLOW TEST:6.519 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2718,"failed":0}
SSSSSSSSSSSS
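The Downward API test above checks the defaulting rule for `resourceFieldRef: limits.memory`: when the container sets no memory limit, the value exposed to the pod is the node's allocatable memory. That rule, as a one-line sketch:

```python
def downward_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Value the downward API exposes for limits.memory: the container's
    own limit if set, otherwise the node's allocatable memory (the
    default the test verifies)."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```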
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:31:28.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1293
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  6 18:31:29.512: INFO: Found 0 stateful pods, waiting for 3
May  6 18:31:39.527: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:31:39.527: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:31:39.527: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May  6 18:31:49.621: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:31:49.621: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:31:49.621: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  6 18:31:49.651: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May  6 18:32:00.226: INFO: Updating stateful set ss2
May  6 18:32:00.311: INFO: Waiting for Pod statefulset-1293/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 18:32:10.320: INFO: Waiting for Pod statefulset-1293/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May  6 18:32:20.452: INFO: Found 2 stateful pods, waiting for 3
May  6 18:32:30.457: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:32:30.457: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:32:30.457: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May  6 18:32:40.507: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:32:40.507: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:32:40.507: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May  6 18:32:40.882: INFO: Updating stateful set ss2
May  6 18:32:41.041: INFO: Waiting for Pod statefulset-1293/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 18:32:51.295: INFO: Waiting for Pod statefulset-1293/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  6 18:33:01.066: INFO: Updating stateful set ss2
May  6 18:33:01.107: INFO: Waiting for StatefulSet statefulset-1293/ss2 to complete update
May  6 18:33:01.107: INFO: Waiting for Pod statefulset-1293/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  6 18:33:11.123: INFO: Deleting all statefulset in ns statefulset-1293
May  6 18:33:11.126: INFO: Scaling statefulset ss2 to 0
May  6 18:33:41.284: INFO: Waiting for statefulset status.replicas updated to 0
May  6 18:33:41.287: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:33:41.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1293" for this suite.

• [SLOW TEST:133.152 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":155,"skipped":2730,"failed":0}
SSSSSSSSSSSSSSSS
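The StatefulSet test above drives the rollout through `rollingUpdate.partition`: only ordinals at or above the partition receive the new revision, so a partition greater than `replicas - 1` updates nothing, `partition = 2` with three replicas canaries only `ss2-2`, and lowering the partition phases the rest in. A sketch of which revision each ordinal should run (simplified, not the controller's code):

```python
def target_revisions(replicas, partition, current_rev, update_rev):
    """Map each pod ordinal to its target controller revision under a
    partitioned RollingUpdate: ordinals >= partition get the update
    revision; lower ordinals stay on the current revision."""
    return {i: (update_rev if i >= partition else current_rev)
            for i in range(replicas)}
```

With the revision names from the log, a canary at partition 2 updates only the highest ordinal, matching the `ss2-2` wait lines above.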
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:33:41.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  6 18:33:42.227: INFO: Waiting up to 5m0s for pod "downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7" in namespace "downward-api-9512" to be "Succeeded or Failed"
May  6 18:33:42.394: INFO: Pod "downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7": Phase="Pending", Reason="", readiness=false. Elapsed: 166.411203ms
May  6 18:33:44.398: INFO: Pod "downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170800792s
May  6 18:33:46.406: INFO: Pod "downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178932718s
STEP: Saw pod success
May  6 18:33:46.407: INFO: Pod "downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7" satisfied condition "Succeeded or Failed"
May  6 18:33:46.409: INFO: Trying to get logs from node kali-worker pod downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7 container dapi-container: 
STEP: delete the pod
May  6 18:33:46.569: INFO: Waiting for pod downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7 to disappear
May  6 18:33:46.638: INFO: Pod downward-api-adf3c7e9-6558-47f4-9852-987324ea75f7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:33:46.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9512" for this suite.

• [SLOW TEST:5.207 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2746,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:33:46.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May  6 18:33:47.606: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May  6 18:33:49.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:33:51.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386827, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:33:54.648: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:33:54.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:33:55.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5692" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:9.163 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":157,"skipped":2767,"failed":0}
SSSSSSSSSSSSS
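The conversion-webhook test above deploys a webhook that the apiserver calls with a ConversionReview whenever a custom resource must be served at a different version. A minimal sketch of a handler body, using plain dicts to stand in for the API types and only rewriting `apiVersion` (a real webhook would also transform fields between schemas; group and version names here are illustrative):

```python
def convert_review(review):
    """Answer a ConversionReview-shaped request: convert every object in
    request.objects to request.desiredAPIVersion and echo the request
    uid back, reporting Success."""
    request = review["request"]
    desired = request["desiredAPIVersion"]
    converted = []
    for obj in request["objects"]:
        out = dict(obj)          # shallow copy; field transforms go here
        out["apiVersion"] = desired
        converted.append(out)
    return {
        "apiVersion": review["apiVersion"],
        "kind": "ConversionReview",
        "response": {
            "uid": request["uid"],
            "result": {"status": "Success"},
            "convertedObjects": converted,
        },
    }
```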
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:33:56.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:33:56.168: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:33:57.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2635" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":158,"skipped":2780,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:33:57.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-8c597a5c-9b7a-47e8-b51d-6a3ab7ba015f
STEP: Creating a pod to test consume configMaps
May  6 18:33:57.695: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4" in namespace "projected-9152" to be "Succeeded or Failed"
May  6 18:33:57.699: INFO: Pod "pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419417ms
May  6 18:33:59.703: INFO: Pod "pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007570963s
May  6 18:34:01.746: INFO: Pod "pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050662956s
STEP: Saw pod success
May  6 18:34:01.746: INFO: Pod "pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4" satisfied condition "Succeeded or Failed"
May  6 18:34:01.749: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4 container projected-configmap-volume-test: 
STEP: delete the pod
May  6 18:34:02.156: INFO: Waiting for pod pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4 to disappear
May  6 18:34:02.411: INFO: Pod pod-projected-configmaps-f2b49433-cc9a-402e-9f06-14e4646038a4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:34:02.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9152" for this suite.

• [SLOW TEST:5.123 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2801,"failed":0}
SSSSSSSSSSSSSS
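The projected-configMap test above exercises `defaultMode`: the file permissions applied to projected keys when no per-item mode is given. The precedence, as a small sketch (0644 as the fallback when neither is set, per the volume defaults):

```python
def effective_mode(default_mode=None, item_mode=None):
    """File mode for a key projected from a configMap/secret volume:
    a per-item `mode` wins, then the volume's `defaultMode`, then 0644."""
    if item_mode is not None:
        return item_mode
    if default_mode is not None:
        return default_mode
    return 0o644
```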
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:34:02.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-3d98dea6-92ae-4dd3-b0ee-164b89464ddc in namespace container-probe-4564
May  6 18:34:09.143: INFO: Started pod liveness-3d98dea6-92ae-4dd3-b0ee-164b89464ddc in namespace container-probe-4564
STEP: checking the pod's current state and verifying that restartCount is present
May  6 18:34:09.149: INFO: Initial restart count of pod liveness-3d98dea6-92ae-4dd3-b0ee-164b89464ddc is 0
May  6 18:34:29.288: INFO: Restart count of pod container-probe-4564/liveness-3d98dea6-92ae-4dd3-b0ee-164b89464ddc is now 1 (20.139405931s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:34:29.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4564" for this suite.

• [SLOW TEST:26.918 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2815,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:34:29.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-a908dd8c-4d4b-4b42-944e-7685bd8945ad
STEP: Creating a pod to test consume secrets
May  6 18:34:29.629: INFO: Waiting up to 5m0s for pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05" in namespace "secrets-936" to be "Succeeded or Failed"
May  6 18:34:29.700: INFO: Pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05": Phase="Pending", Reason="", readiness=false. Elapsed: 70.191881ms
May  6 18:34:31.736: INFO: Pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106566267s
May  6 18:34:33.740: INFO: Pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110670224s
May  6 18:34:36.146: INFO: Pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.516579528s
STEP: Saw pod success
May  6 18:34:36.146: INFO: Pod "pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05" satisfied condition "Succeeded or Failed"
May  6 18:34:36.149: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05 container secret-volume-test: 
STEP: delete the pod
May  6 18:34:37.034: INFO: Waiting for pod pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05 to disappear
May  6 18:34:37.370: INFO: Pod pod-secrets-72b78e3e-2d7e-47e3-a488-873c49478c05 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:34:37.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-936" for this suite.

• [SLOW TEST:8.084 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2848,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:34:37.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:34:39.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8" in namespace "downward-api-4595" to be "Succeeded or Failed"
May  6 18:34:39.207: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8": Phase="Pending", Reason="", readiness=false. Elapsed: 75.717074ms
May  6 18:34:41.220: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088905618s
May  6 18:34:43.346: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215317098s
May  6 18:34:45.350: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8": Phase="Running", Reason="", readiness=true. Elapsed: 6.219037028s
May  6 18:34:47.354: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.222869069s
STEP: Saw pod success
May  6 18:34:47.354: INFO: Pod "downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8" satisfied condition "Succeeded or Failed"
May  6 18:34:47.357: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8 container client-container: 
STEP: delete the pod
May  6 18:34:47.392: INFO: Waiting for pod downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8 to disappear
May  6 18:34:47.400: INFO: Pod downwardapi-volume-f380a22f-3206-439d-93b2-1418fbf463f8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:34:47.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4595" for this suite.

• [SLOW TEST:9.980 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2870,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:34:47.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:34:47.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5" in namespace "downward-api-1547" to be "Succeeded or Failed"
May  6 18:34:47.495: INFO: Pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.393567ms
May  6 18:34:49.500: INFO: Pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020302908s
May  6 18:34:51.504: INFO: Pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024574265s
May  6 18:34:53.508: INFO: Pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028027353s
STEP: Saw pod success
May  6 18:34:53.508: INFO: Pod "downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5" satisfied condition "Succeeded or Failed"
May  6 18:34:53.633: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5 container client-container: 
STEP: delete the pod
May  6 18:34:53.671: INFO: Waiting for pod downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5 to disappear
May  6 18:34:53.687: INFO: Pod downwardapi-volume-3d03a1ef-260d-4c91-914c-0859aa6344a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:34:53.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1547" for this suite.

• [SLOW TEST:6.288 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2890,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:34:53.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gjsk8 in namespace proxy-887
I0506 18:34:53.921750       7 runners.go:190] Created replication controller with name: proxy-service-gjsk8, namespace: proxy-887, replica count: 1
I0506 18:34:54.972244       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:34:55.972394       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:34:56.972593       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:34:57.972854       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:34:58.973064       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:34:59.973339       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:00.973550       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:01.973740       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:02.973938       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:03.974197       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:04.974406       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0506 18:35:05.974588       7 runners.go:190] proxy-service-gjsk8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 18:35:05.977: INFO: setup took 12.199088219s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May  6 18:35:06.024: INFO: (0) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 46.83113ms)
May  6 18:35:06.024: INFO: (0) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtest (200; 46.850053ms)
May  6 18:35:06.024: INFO: (0) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 46.971616ms)
May  6 18:35:06.024: INFO: (0) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 46.879893ms)
May  6 18:35:06.025: INFO: (0) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 48.322709ms)
May  6 18:35:06.025: INFO: (0) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 48.396963ms)
May  6 18:35:06.026: INFO: (0) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 48.682448ms)
May  6 18:35:06.028: INFO: (0) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 51.461515ms)
May  6 18:35:06.028: INFO: (0) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 51.532628ms)
May  6 18:35:06.034: INFO: (0) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 56.976564ms)
May  6 18:35:06.034: INFO: (0) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 57.141776ms)
May  6 18:35:06.034: INFO: (0) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 57.012292ms)
May  6 18:35:06.034: INFO: (0) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 56.95591ms)
May  6 18:35:06.035: INFO: (0) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: testtest (200; 4.105785ms)
May  6 18:35:06.039: INFO: (1) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.209395ms)
May  6 18:35:06.039: INFO: (1) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 4.225504ms)
May  6 18:35:06.039: INFO: (1) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 4.315831ms)
May  6 18:35:06.039: INFO: (1) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 4.291257ms)
May  6 18:35:06.039: INFO: (1) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 4.238464ms)
May  6 18:35:06.040: INFO: (1) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 4.642084ms)
May  6 18:35:06.040: INFO: (1) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 4.973044ms)
May  6 18:35:06.040: INFO: (1) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 5.025326ms)
May  6 18:35:06.041: INFO: (1) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 6.03336ms)
May  6 18:35:06.043: INFO: (2) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testt... (200; 3.254199ms)
May  6 18:35:06.044: INFO: (2) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 3.410412ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 3.841108ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 3.919254ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 3.925337ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 4.018533ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 4.234871ms)
May  6 18:35:06.045: INFO: (2) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.202797ms)
May  6 18:35:06.046: INFO: (2) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 4.796139ms)
May  6 18:35:06.046: INFO: (2) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 4.797242ms)
May  6 18:35:06.046: INFO: (2) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 4.837409ms)
May  6 18:35:06.046: INFO: (2) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 4.935108ms)
May  6 18:35:06.046: INFO: (2) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 5.14408ms)
May  6 18:35:06.060: INFO: (3) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 13.952695ms)
May  6 18:35:06.060: INFO: (3) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 13.95578ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: test (200; 14.557614ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testt... (200; 14.650056ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 14.719473ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 14.662354ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 14.655582ms)
May  6 18:35:06.061: INFO: (3) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 14.761984ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 16.292508ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 16.28033ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 16.268628ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 16.351019ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 16.310312ms)
May  6 18:35:06.063: INFO: (3) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 16.892148ms)
May  6 18:35:06.079: INFO: (4) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 15.293674ms)
May  6 18:35:06.079: INFO: (4) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: testt... (200; 17.234603ms)
May  6 18:35:06.080: INFO: (4) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 17.244584ms)
May  6 18:35:06.080: INFO: (4) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 17.158882ms)
May  6 18:35:06.081: INFO: (4) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 17.432445ms)
May  6 18:35:06.081: INFO: (4) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 17.54228ms)
May  6 18:35:06.085: INFO: (5) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 4.470921ms)
May  6 18:35:06.086: INFO: (5) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: test (200; 4.654655ms)
May  6 18:35:06.086: INFO: (5) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.790355ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 5.742158ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 5.849721ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 5.856899ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.866376ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 5.819726ms)
May  6 18:35:06.087: INFO: (5) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtest (200; 2.736321ms)
May  6 18:35:06.092: INFO: (6) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 4.303852ms)
May  6 18:35:06.092: INFO: (6) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 4.540564ms)
May  6 18:35:06.093: INFO: (6) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 4.87988ms)
May  6 18:35:06.093: INFO: (6) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 4.96596ms)
May  6 18:35:06.093: INFO: (6) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 5.000711ms)
May  6 18:35:06.093: INFO: (6) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtesttest (200; 4.262039ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 4.223469ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 4.363672ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 4.355797ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 4.551739ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 4.388697ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.590364ms)
May  6 18:35:06.105: INFO: (7) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 4.453328ms)
May  6 18:35:06.106: INFO: (7) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 4.886336ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 2.806408ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 3.112832ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 3.153549ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 3.49066ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtest (200; 3.55956ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 3.759809ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 3.720536ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 3.777729ms)
May  6 18:35:06.109: INFO: (8) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 3.728046ms)
May  6 18:35:06.110: INFO: (8) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: t... (200; 4.076255ms)
May  6 18:35:06.110: INFO: (8) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 4.516772ms)
May  6 18:35:06.110: INFO: (8) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 4.601086ms)
May  6 18:35:06.110: INFO: (8) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 4.618197ms)
May  6 18:35:06.110: INFO: (8) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.695641ms)
May  6 18:35:06.113: INFO: (9) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 2.423687ms)
May  6 18:35:06.113: INFO: (9) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 2.903648ms)
May  6 18:35:06.113: INFO: (9) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 2.885119ms)
May  6 18:35:06.114: INFO: (9) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: testt... (200; 4.067724ms)
May  6 18:35:06.115: INFO: (9) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 4.205835ms)
May  6 18:35:06.115: INFO: (9) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 4.50516ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.579002ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 5.626689ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 5.657921ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.836077ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 5.908878ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 5.983145ms)
May  6 18:35:06.121: INFO: (10) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtest (200; 5.185117ms)
May  6 18:35:06.191: INFO: (11) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.650172ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 6.024089ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 6.137183ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 6.047435ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 6.126706ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 6.182286ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 6.188643ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 6.365308ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 6.55401ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 7.012058ms)
May  6 18:35:06.192: INFO: (11) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtestt... (200; 6.372995ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 6.48578ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 6.474389ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 6.355575ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: test (200; 6.567188ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 6.402081ms)
May  6 18:35:06.199: INFO: (12) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 6.418993ms)
May  6 18:35:06.201: INFO: (12) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 7.858757ms)
May  6 18:35:06.205: INFO: (13) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 4.406055ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.580409ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 4.738918ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 4.85675ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 5.090127ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: t... (200; 5.131078ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 5.387511ms)
May  6 18:35:06.206: INFO: (13) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 5.491516ms)
May  6 18:35:06.207: INFO: (13) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.701477ms)
May  6 18:35:06.207: INFO: (13) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtestt... (200; 5.234853ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 5.348929ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 5.901202ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 5.903623ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 5.955528ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 5.985784ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 5.989418ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 5.992699ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 6.015595ms)
May  6 18:35:06.213: INFO: (14) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 5.973991ms)
May  6 18:35:06.217: INFO: (15) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 3.319106ms)
May  6 18:35:06.217: INFO: (15) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 3.762576ms)
May  6 18:35:06.217: INFO: (15) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:1080/proxy/: t... (200; 3.778671ms)
May  6 18:35:06.217: INFO: (15) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtest (200; 3.799885ms)
May  6 18:35:06.217: INFO: (15) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: test (200; 4.087245ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 4.1106ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 4.16923ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 4.268101ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname2/proxy/: bar (200; 4.331597ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 4.233608ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 4.372602ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testt... (200; 4.416759ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:160/proxy/: foo (200; 4.407181ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname1/proxy/: tls baz (200; 4.475469ms)
May  6 18:35:06.223: INFO: (16) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: testtest (200; 5.000565ms)
May  6 18:35:06.228: INFO: (17) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 4.957544ms)
May  6 18:35:06.228: INFO: (17) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: t... (200; 5.076963ms)
May  6 18:35:06.231: INFO: (18) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:462/proxy/: tls qux (200; 2.527764ms)
May  6 18:35:06.231: INFO: (18) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc/proxy/: test (200; 2.7642ms)
May  6 18:35:06.232: INFO: (18) /api/v1/namespaces/proxy-887/pods/http:proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 3.267803ms)
May  6 18:35:06.232: INFO: (18) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:162/proxy/: bar (200; 3.312056ms)
May  6 18:35:06.232: INFO: (18) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: t... (200; 4.052744ms)
May  6 18:35:06.233: INFO: (18) /api/v1/namespaces/proxy-887/services/http:proxy-service-gjsk8:portname1/proxy/: foo (200; 4.138955ms)
May  6 18:35:06.233: INFO: (18) /api/v1/namespaces/proxy-887/services/https:proxy-service-gjsk8:tlsportname2/proxy/: tls qux (200; 4.264156ms)
May  6 18:35:06.233: INFO: (18) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:460/proxy/: tls baz (200; 4.312451ms)
May  6 18:35:06.233: INFO: (18) /api/v1/namespaces/proxy-887/pods/proxy-service-gjsk8-j5ctc:1080/proxy/: testtestt... (200; 3.925458ms)
May  6 18:35:06.237: INFO: (19) /api/v1/namespaces/proxy-887/pods/https:proxy-service-gjsk8-j5ctc:443/proxy/: test (200; 5.048435ms)
May  6 18:35:06.238: INFO: (19) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname1/proxy/: foo (200; 5.095789ms)
May  6 18:35:06.238: INFO: (19) /api/v1/namespaces/proxy-887/services/proxy-service-gjsk8:portname2/proxy/: bar (200; 5.127724ms)
STEP: deleting ReplicationController proxy-service-gjsk8 in namespace proxy-887, will wait for the garbage collector to delete the pods
May  6 18:35:06.296: INFO: Deleting ReplicationController proxy-service-gjsk8 took: 5.720841ms
May  6 18:35:06.596: INFO: Terminating ReplicationController proxy-service-gjsk8 pods took: 300.360002ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:35:10.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-887" for this suite.

• [SLOW TEST:16.686 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":164,"skipped":2910,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:35:10.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:35:16.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2878" for this suite.

• [SLOW TEST:6.547 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":165,"skipped":2921,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:35:16.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6240a6c6-d5ba-4c3a-a7bd-ffc649bc61b2
STEP: Creating a pod to test consume configMaps
May  6 18:35:17.533: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5" in namespace "projected-5442" to be "Succeeded or Failed"
May  6 18:35:17.705: INFO: Pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5": Phase="Pending", Reason="", readiness=false. Elapsed: 171.525984ms
May  6 18:35:19.711: INFO: Pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177695725s
May  6 18:35:21.715: INFO: Pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182217468s
May  6 18:35:23.720: INFO: Pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186592234s
STEP: Saw pod success
May  6 18:35:23.720: INFO: Pod "pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5" satisfied condition "Succeeded or Failed"
May  6 18:35:23.723: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5 container projected-configmap-volume-test: 
STEP: delete the pod
May  6 18:35:23.935: INFO: Waiting for pod pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5 to disappear
May  6 18:35:24.137: INFO: Pod pod-projected-configmaps-ab259ee9-7422-4abb-93b7-c336b545d3c5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:35:24.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5442" for this suite.

• [SLOW TEST:7.299 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2949,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:35:24.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May  6 18:35:24.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067346 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:35:24.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067346 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May  6 18:35:34.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067384 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:35:34.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067384 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May  6 18:35:44.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067414 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:35:44.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067414 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May  6 18:35:54.512: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067441 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:35:54.512: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-a bef5541f-95a6-40f0-86e0-80a022752860 2067441 0 2020-05-06 18:35:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-06 18:35:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May  6 18:36:04.520: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-b b1295f6a-be3a-4158-8725-beb4afa9f550 2067471 0 2020-05-06 18:36:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-06 18:36:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:36:04.520: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-b b1295f6a-be3a-4158-8725-beb4afa9f550 2067471 0 2020-05-06 18:36:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-06 18:36:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May  6 18:36:14.528: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-b b1295f6a-be3a-4158-8725-beb4afa9f550 2067501 0 2020-05-06 18:36:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-06 18:36:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:36:14.528: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3882 /api/v1/namespaces/watch-3882/configmaps/e2e-watch-test-configmap-b b1295f6a-be3a-4158-8725-beb4afa9f550 2067501 0 2020-05-06 18:36:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-06 18:36:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:36:24.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3882" for this suite.

• [SLOW TEST:60.337 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":167,"skipped":2951,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:36:24.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:36:25.007: INFO: Pod name rollover-pod: Found 0 pods out of 1
May  6 18:36:30.011: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  6 18:36:30.011: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May  6 18:36:32.015: INFO: Creating deployment "test-rollover-deployment"
May  6 18:36:32.026: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May  6 18:36:34.032: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May  6 18:36:34.039: INFO: Ensure that both replica sets have 1 created replica
May  6 18:36:34.045: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May  6 18:36:34.052: INFO: Updating deployment test-rollover-deployment
May  6 18:36:34.053: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May  6 18:36:36.089: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May  6 18:36:36.096: INFO: Make sure deployment "test-rollover-deployment" is complete
May  6 18:36:36.101: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:36.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386994, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:38.109: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:38.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386994, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:40.110: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:40.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386998, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:42.109: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:42.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386998, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:44.115: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:44.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386998, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:46.291: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:46.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386998, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:48.159: INFO: all replica sets need to contain the pod-template-hash label
May  6 18:36:48.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386998, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386992, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:36:50.108: INFO: 
May  6 18:36:50.108: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  6 18:36:50.114: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-3114 /apis/apps/v1/namespaces/deployment-3114/deployments/test-rollover-deployment 78ced047-7d7e-492d-a3dc-ea297f39dad7 2067677 2 2020-05-06 18:36:32 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-06 18:36:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-06 18:36:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00615a658  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 18:36:32 +0000 UTC,LastTransitionTime:2020-05-06 18:36:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-06 18:36:48 +0000 UTC,LastTransitionTime:2020-05-06 18:36:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

May  6 18:36:50.117: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-3114 /apis/apps/v1/namespaces/deployment-3114/replicasets/test-rollover-deployment-84f7f6f64b 4e3e39cf-1bd7-4864-ab98-f5867795efc6 2067665 2 2020-05-06 18:36:34 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 78ced047-7d7e-492d-a3dc-ea297f39dad7 0xc00615ac87 0xc00615ac88}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:36:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 56 99 101 100 48 52 55 45 55 100 55 101 45 52 57 50 100 45 97 51 100 99 45 101 97 50 57 55 102 51 57 100 97 100 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 
34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 
105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00615ad18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  6 18:36:50.117: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May  6 18:36:50.117: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-3114 /apis/apps/v1/namespaces/deployment-3114/replicasets/test-rollover-controller 439b5100-b54e-46f2-a162-280513dec9fa 2067674 2 2020-05-06 18:36:24 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 78ced047-7d7e-492d-a3dc-ea297f39dad7 0xc00615aa77 0xc00615aa78}] []  [{e2e.test Update apps/v1 2020-05-06 18:36:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-06 18:36:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 56 99 101 100 48 52 55 45 55 100 55 101 45 52 57 50 100 45 97 51 100 99 45 101 97 50 57 55 102 51 57 100 97 100 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00615ab18  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:36:50.118: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-3114 /apis/apps/v1/namespaces/deployment-3114/replicasets/test-rollover-deployment-5686c4cfd5 99696436-b789-4f7e-bd5d-b9aaf06de59f 2067617 2 2020-05-06 18:36:32 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 78ced047-7d7e-492d-a3dc-ea297f39dad7 0xc00615ab87 0xc00615ab88}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:36:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 56 99 101 100 48 52 55 45 55 100 55 101 45 52 57 50 100 45 97 51 100 99 45 101 97 50 57 55 102 51 57 100 97 100 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 
34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00615ac18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:36:50.121: INFO: Pod "test-rollover-deployment-84f7f6f64b-lt6n8" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-lt6n8 test-rollover-deployment-84f7f6f64b- deployment-3114 /api/v1/namespaces/deployment-3114/pods/test-rollover-deployment-84f7f6f64b-lt6n8 a7a6beb8-1507-4149-9585-9c1522d73b6b 2067635 0 2020-05-06 18:36:34 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 4e3e39cf-1bd7-4864-ab98-f5867795efc6 0xc00615b2c7 0xc00615b2c8}] []  [{kube-controller-manager Update v1 2020-05-06 18:36:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 101 51 101 51 57 99 102 45 49 98 100 55 45 52 56 54 52 45 97 98 57 56 45 102 53 56 54 55 55 57 53 101 102 99 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 
115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:36:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 
101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j27gj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j27gj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j27gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:36:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:36:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:36:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.120,StartTime:2020-05-06 18:36:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:36:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://cb9275f93651abf4110ab1c8adf52e88ba490f69b3dc4cfb2d58b012ced49a90,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:36:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3114" for this suite.

• [SLOW TEST:25.562 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":168,"skipped":2954,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:36:50.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
May  6 18:36:50.196: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5643" to be "Succeeded or Failed"
May  6 18:36:50.214: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.914541ms
May  6 18:36:52.405: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209806154s
May  6 18:36:54.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336635127s
May  6 18:36:56.613: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417667499s
May  6 18:36:58.617: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.421344966s
May  6 18:37:00.621: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425595941s
STEP: Saw pod success
May  6 18:37:00.621: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May  6 18:37:00.625: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
May  6 18:37:00.743: INFO: Waiting for pod pod-host-path-test to disappear
May  6 18:37:00.757: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:00.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5643" for this suite.

• [SLOW TEST:10.636 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":3030,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:00.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  6 18:37:00.836: INFO: Waiting up to 5m0s for pod "pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b" in namespace "emptydir-9819" to be "Succeeded or Failed"
May  6 18:37:00.867: INFO: Pod "pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.94465ms
May  6 18:37:02.870: INFO: Pod "pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034241804s
May  6 18:37:04.875: INFO: Pod "pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038334917s
STEP: Saw pod success
May  6 18:37:04.875: INFO: Pod "pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b" satisfied condition "Succeeded or Failed"
May  6 18:37:04.877: INFO: Trying to get logs from node kali-worker2 pod pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b container test-container: 
STEP: delete the pod
May  6 18:37:05.007: INFO: Waiting for pod pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b to disappear
May  6 18:37:05.015: INFO: Pod pod-9852b6c9-fa1e-4b69-ada7-d19c2e474e4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:05.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9819" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":3055,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:05.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May  6 18:37:05.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9269'
May  6 18:37:09.625: INFO: stderr: ""
May  6 18:37:09.625: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
May  6 18:37:19.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9269 -o json'
May  6 18:37:19.762: INFO: stderr: ""
May  6 18:37:19.762: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-05-06T18:37:09Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-06T18:37:09Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.1.122\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-05-06T18:37:15Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9269\",\n        \"resourceVersion\": \"2067848\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-9269/pods/e2e-test-httpd-pod\",\n        \"uid\": \"613b0090-1924-4f38-a0d3-99f0a8689c96\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-h8d6l\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-h8d6l\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-h8d6l\"\n                }\n       
     }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-06T18:37:10Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-06T18:37:15Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-06T18:37:15Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-05-06T18:37:09Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://c06a76496038a5fca93f2fa7244fd6520c803f7a769623582d97862ec5765179\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-05-06T18:37:14Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.18\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.122\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.122\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-05-06T18:37:10Z\"\n    }\n}\n"
STEP: replace the image in the pod
May  6 18:37:19.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9269'
May  6 18:37:21.025: INFO: stderr: ""
May  6 18:37:21.025: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May  6 18:37:21.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9269'
May  6 18:37:33.424: INFO: stderr: ""
May  6 18:37:33.424: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:33.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9269" for this suite.

• [SLOW TEST:28.436 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":171,"skipped":3078,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:33.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
May  6 18:37:33.595: INFO: Waiting up to 5m0s for pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9" in namespace "emptydir-7815" to be "Succeeded or Failed"
May  6 18:37:33.603: INFO: Pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338177ms
May  6 18:37:35.615: INFO: Pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020193309s
May  6 18:37:37.627: INFO: Pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032801039s
May  6 18:37:39.646: INFO: Pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051762476s
STEP: Saw pod success
May  6 18:37:39.646: INFO: Pod "pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9" satisfied condition "Succeeded or Failed"
May  6 18:37:39.649: INFO: Trying to get logs from node kali-worker2 pod pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9 container test-container: 
STEP: delete the pod
May  6 18:37:39.676: INFO: Waiting for pod pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9 to disappear
May  6 18:37:39.724: INFO: Pod pod-059f3fe3-68f6-4d59-94f6-4a5f256d12d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:39.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7815" for this suite.

• [SLOW TEST:6.269 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":3098,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:39.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:37:39.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2947'
May  6 18:37:40.044: INFO: stderr: ""
May  6 18:37:40.044: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May  6 18:37:40.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2947'
May  6 18:37:40.462: INFO: stderr: ""
May  6 18:37:40.462: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  6 18:37:41.501: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:37:41.501: INFO: Found 0 / 1
May  6 18:37:42.466: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:37:42.467: INFO: Found 0 / 1
May  6 18:37:43.466: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:37:43.466: INFO: Found 0 / 1
May  6 18:37:44.503: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:37:44.503: INFO: Found 1 / 1
May  6 18:37:44.503: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May  6 18:37:44.506: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:37:44.506: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  6 18:37:44.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-vqdbs --namespace=kubectl-2947'
May  6 18:37:44.622: INFO: stderr: ""
May  6 18:37:44.622: INFO: stdout: "Name:         agnhost-master-vqdbs\nNamespace:    kubectl-2947\nPriority:     0\nNode:         kali-worker/172.17.0.15\nStart Time:   Wed, 06 May 2020 18:37:40 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.110\nIPs:\n  IP:           10.244.2.110\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://2e8db5d7aa28ab41389a1f51e474de3447867526a24db466d9246e7bf25b70a9\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 06 May 2020 18:37:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f28dt (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-f28dt:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-f28dt\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  4s    default-scheduler     Successfully assigned kubectl-2947/agnhost-master-vqdbs to kali-worker\n  Normal  Pulled     3s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, 
kali-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, kali-worker  Started container agnhost-master\n"
May  6 18:37:44.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2947'
May  6 18:37:44.736: INFO: stderr: ""
May  6 18:37:44.736: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2947\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-vqdbs\n"
May  6 18:37:44.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2947'
May  6 18:37:44.909: INFO: stderr: ""
May  6 18:37:44.909: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2947\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.211.98\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.110:6379\nSession Affinity:  None\nEvents:            \n"
May  6 18:37:44.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May  6 18:37:45.039: INFO: stderr: ""
May  6 18:37:45.039: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 06 May 2020 18:37:39 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 06 May 2020 18:37:12 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 06 May 2020 18:37:12 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 06 May 2020 18:37:12 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 06 May 2020 18:37:12 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  
hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     7d9h\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     7d9h\n  kube-system                 etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      7d9h\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 
(0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         7d9h\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7d9h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
May  6 18:37:45.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-2947'
May  6 18:37:45.166: INFO: stderr: ""
May  6 18:37:45.166: INFO: stdout: "Name:         kubectl-2947\nLabels:       e2e-framework=kubectl\n              e2e-run=65484ced-1315-4076-be56-5961526c5d06\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:45.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2947" for this suite.

• [SLOW TEST:5.442 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":173,"skipped":3108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:45.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May  6 18:37:53.333: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  6 18:37:53.578: INFO: Pod pod-with-prestop-exec-hook still exists
May  6 18:37:55.579: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  6 18:37:55.583: INFO: Pod pod-with-prestop-exec-hook still exists
May  6 18:37:57.579: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  6 18:37:57.583: INFO: Pod pod-with-prestop-exec-hook still exists
May  6 18:37:59.579: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  6 18:37:59.583: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:37:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3826" for this suite.

• [SLOW TEST:14.426 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":3135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:37:59.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
May  6 18:37:59.677: INFO: Waiting up to 5m0s for pod "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd" in namespace "emptydir-1407" to be "Succeeded or Failed"
May  6 18:37:59.711: INFO: Pod "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.292812ms
May  6 18:38:01.715: INFO: Pod "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037997322s
May  6 18:38:03.940: INFO: Pod "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.263084942s
STEP: Saw pod success
May  6 18:38:03.940: INFO: Pod "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd" satisfied condition "Succeeded or Failed"
May  6 18:38:04.035: INFO: Trying to get logs from node kali-worker2 pod pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd container test-container: 
STEP: delete the pod
May  6 18:38:04.131: INFO: Waiting for pod pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd to disappear
May  6 18:38:04.148: INFO: Pod pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:38:04.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1407" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3165,"failed":0}
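The emptyDir test above mounts a tmpfs-backed volume (`medium: Memory`) into a non-root pod and checks a 0644 file on it. The spec is not printed in the log; a hypothetical sketch of its shape (image, UID, and mount path are illustrative assumptions):

```python
# Hypothetical sketch of a pod exercising an emptyDir backed by tmpfs, matching
# what the test name describes. Image, UID, and paths are assumptions.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-11c5a4f6-220d-4a3a-a073-eeb84b784fbd"},  # name from the log
    "spec": {
        "securityContext": {"runAsUser": 1001},  # non-root; UID assumed
        "containers": [
            {
                "name": "test-container",  # container name as seen in the log
                "image": "example/mounttest",  # assumed image
                "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
            }
        ],
        "volumes": [
            # medium "Memory" makes the emptyDir a tmpfs mount
            {"name": "test-volume", "emptyDir": {"medium": "Memory"}}
        ],
    },
}
print(pod["spec"]["volumes"][0]["emptyDir"]["medium"])  # Memory
```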
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:38:04.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:38:04.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May  6 18:38:05.138: INFO: stderr: ""
May  6 18:38:05.138: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T17:28:31Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:38:05.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6860" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":176,"skipped":3186,"failed":0}
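The `kubectl version` stdout captured above is a Go struct dump. When post-processing a log like this, the client and server versions can be pulled out with a simple regular expression; a minimal sketch (the string literal below is an abridged copy of the log line):

```python
import re

# Extract GitVersion fields from the Go version.Info dump in the log above.
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", '
    'Compiler:"gc", Platform:"linux/amd64"}\n'
    'Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", '
    'Compiler:"gc", Platform:"linux/amd64"}\n'
)
versions = re.findall(r'GitVersion:"([^"]+)"', stdout)
print(versions)  # ['v1.18.2', 'v1.18.2']
```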
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:38:05.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:38:05.580: INFO: Pod name cleanup-pod: Found 0 pods out of 1
May  6 18:38:10.585: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May  6 18:38:12.592: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  6 18:38:12.736: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-14 /apis/apps/v1/namespaces/deployment-14/deployments/test-cleanup-deployment c3891202-6216-451d-9b19-fe33ef786c19 2068190 1 2020-05-06 18:38:12 +0000 UTC   map[name:cleanup-pod] map[] [] []  [{e2e.test Update apps/v1 2020-05-06 18:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006223028  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}
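The `FieldsV1{Raw:*[123 34 102 58 ...]}` sequences in the Deployment, ReplicaSet, and Pod dumps above are Go byte-slice prints of the managedFields JSON: each number is one ASCII byte. They can be decoded back to readable JSON; a minimal sketch using the opening bytes of the dump above:

```python
# Decode a Go byte-slice dump (FieldsV1 Raw:*[...]) back into its JSON text.
# The numbers below are the first bytes of the dump above, closed off with
# "}}" so the fragment is complete: 123='{', 34='"', 102='f', ...
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]
decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{}}
```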

May  6 18:38:12.999: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f  deployment-14 /apis/apps/v1/namespaces/deployment-14/replicasets/test-cleanup-deployment-b4867b47f 22ba1752-2f99-4471-92e5-36e9376fdab3 2068192 1 2020-05-06 18:38:12 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment c3891202-6216-451d-9b19-fe33ef786c19 0xc005fb4840 0xc005fb4841}] []  [{kube-controller-manager Update apps/v1 2020-05-06 18:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 51 56 57 49 50 48 50 45 54 50 49 54 45 52 53 49 100 45 57 98 49 57 45 102 101 51 51 101 102 55 56 54 99 49 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 
44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 
58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005fb48b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  6 18:38:12.999: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May  6 18:38:12.999: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-14 /apis/apps/v1/namespaces/deployment-14/replicasets/test-cleanup-controller af428de2-60c2-4bbe-941c-39532299c74c 2068191 1 2020-05-06 18:38:05 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment c3891202-6216-451d-9b19-fe33ef786c19 0xc005fb4737 0xc005fb4738}] []  [{e2e.test Update apps/v1 2020-05-06 18:38:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 
114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-06 18:38:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 51 56 57 49 50 48 50 45 54 50 49 54 45 52 53 49 100 45 57 98 49 57 45 102 101 51 51 101 102 55 56 54 99 49 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005fb47d8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May  6 18:38:13.120: INFO: Pod "test-cleanup-controller-vmlgs" is available:
&Pod{ObjectMeta:{test-cleanup-controller-vmlgs test-cleanup-controller- deployment-14 /api/v1/namespaces/deployment-14/pods/test-cleanup-controller-vmlgs d9ab3fb5-b724-4ebd-84d7-7e69f72c2955 2068183 0 2020-05-06 18:38:05 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller af428de2-60c2-4bbe-941c-39532299c74c 0xc006223517 0xc006223518}] []  [{kube-controller-manager Update v1 2020-05-06 18:38:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 102 52 50 56 100 101 50 45 54 48 99 50 45 52 98 98 101 45 57 52 49 99 45 51 57 53 51 50 50 57 57 99 55 52 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 
97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-06 18:38:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 
125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 50 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrbcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrbcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrbcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountNam
e:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:38:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:38:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:38:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:38:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.127,StartTime:2020-05-06 18:38:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 18:38:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://51adaea9b4a11dbd0bce69070d77498347345d3e37b0c4be63aaa4b36bfb623e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 18:38:13.120: INFO: Pod "test-cleanup-deployment-b4867b47f-hrjml" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-hrjml test-cleanup-deployment-b4867b47f- deployment-14 /api/v1/namespaces/deployment-14/pods/test-cleanup-deployment-b4867b47f-hrjml 2b920aa8-2d87-4f1c-aee8-689ac0937635 2068197 0 2020-05-06 18:38:12 +0000 UTC   map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f 22ba1752-2f99-4471-92e5-36e9376fdab3 0xc0062236d0 0xc0062236d1}] []  [{kube-controller-manager Update v1 2020-05-06 18:38:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 50 98 97 49 55 53 50 45 50 102 57 57 45 52 52 55 49 45 57 50 101 53 45 51 54 101 57 51 55 54 102 100 97 98 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 
114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrbcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrbcm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrbcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCo
ntext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 18:38:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
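The long runs of decimal numbers in the pod dumps above (e.g. `Raw:*[123 34 102 58 ...]`) are the managed-fields `FieldsV1` JSON printed byte-by-byte. A minimal sketch of decoding such an array back into readable JSON; the `prefix` below is copied from the dump above, and the helper name is illustrative:

```python
def decode_fieldsv1(raw):
    """Turn a FieldsV1 Raw byte-value list back into its JSON text."""
    return bytes(raw).decode("utf-8")

# First few values of the Raw array from the test-cleanup-deployment dump:
prefix = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]
print(decode_fieldsv1(prefix))  # -> {"f:metadata":{
```

Feeding in a full array yields the complete `f:metadata`/`f:spec` field-ownership document that `kube-controller-manager` recorded for the pod.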
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:38:13.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-14" for this suite.

• [SLOW TEST:7.835 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":177,"skipped":3196,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:38:13.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-6c5f1b5a-4171-4fb6-82a6-0131f9632c0c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:38:22.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4474" for this suite.

• [SLOW TEST:9.132 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":3198,"failed":0}
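The test above checks that a ConfigMap's `binaryData` survives the trip into a mounted volume. In the API object, `binaryData` values are base64-encoded; the volume file holds the decoded bytes. A sketch of that round-trip (the payload bytes are illustrative, not the ones the test uses):

```python
import base64

# Arbitrary non-UTF-8 bytes, standing in for the test's binary payload:
payload = bytes([0xFF, 0xFE, 0x00, 0x7F])

encoded = base64.b64encode(payload).decode("ascii")  # what binaryData stores
decoded = base64.b64decode(encoded)                  # what the mounted file holds

print(encoded, decoded == payload)
```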
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:38:22.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-65f983c7-110a-4f90-a919-1fed62ff98a9 in namespace container-probe-2163
May  6 18:38:26.358: INFO: Started pod liveness-65f983c7-110a-4f90-a919-1fed62ff98a9 in namespace container-probe-2163
STEP: checking the pod's current state and verifying that restartCount is present
May  6 18:38:26.361: INFO: Initial restart count of pod liveness-65f983c7-110a-4f90-a919-1fed62ff98a9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:42:27.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2163" for this suite.

• [SLOW TEST:245.443 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3223,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:42:27.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:42:45.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6132" for this suite.

• [SLOW TEST:18.085 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":180,"skipped":3228,"failed":0}
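The scope behaviour this test exercises hinges on one pod field: a quota with scope `Terminating` counts only pods that set `spec.activeDeadlineSeconds`, while `NotTerminating` counts pods that leave it unset, which is why each quota above "ignored" the other kind of pod. A toy model of that matching (the pod dicts are illustrative, not the test's actual specs):

```python
def matches_scope(pod, scope):
    """Return whether a pod's usage counts against a quota with this scope."""
    terminating = pod.get("activeDeadlineSeconds") is not None
    if scope == "Terminating":
        return terminating
    if scope == "NotTerminating":
        return not terminating
    raise ValueError(f"unknown scope: {scope}")

long_running = {"name": "long-running-pod"}                       # no deadline
terminating = {"name": "terminating-pod", "activeDeadlineSeconds": 3600}

print(matches_scope(long_running, "NotTerminating"))  # True
print(matches_scope(terminating, "Terminating"))      # True
print(matches_scope(long_running, "Terminating"))     # False
```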
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:42:45.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May  6 18:42:46.136: INFO: Waiting up to 5m0s for pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1" in namespace "var-expansion-1498" to be "Succeeded or Failed"
May  6 18:42:46.404: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1": Phase="Pending", Reason="", readiness=false. Elapsed: 267.793297ms
May  6 18:42:48.409: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272237016s
May  6 18:42:50.413: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276233097s
May  6 18:42:52.432: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295405879s
May  6 18:42:54.435: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.299056056s
STEP: Saw pod success
May  6 18:42:54.435: INFO: Pod "var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1" satisfied condition "Succeeded or Failed"
May  6 18:42:54.438: INFO: Trying to get logs from node kali-worker2 pod var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1 container dapi-container: 
STEP: delete the pod
May  6 18:42:54.814: INFO: Waiting for pod var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1 to disappear
May  6 18:42:54.827: INFO: Pod var-expansion-1fc838e1-59af-41a4-bf5c-58aab0c12db1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:42:54.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1498" for this suite.

• [SLOW TEST:9.030 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3230,"failed":0}
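The "env composition" being verified above is Kubernetes' `$(VAR)` syntax: an env value may reference variables declared earlier in the same container, `$$` escapes a literal `$`, and unresolvable references are left verbatim. A minimal sketch of those expansion rules (variable names are illustrative):

```python
def expand_env(env_list):
    """Expand $(VAR) references in declaration order, Kubernetes-style."""
    resolved = {}

    def expand(value):
        out, i = [], 0
        while i < len(value):
            if value.startswith("$$", i):            # "$$" escapes a literal "$"
                out.append("$")
                i += 2
            elif value.startswith("$(", i):
                end = value.find(")", i)
                name = value[i + 2:end] if end != -1 else None
                if name is not None and name in resolved:
                    out.append(resolved[name])       # substitute earlier var
                    i = end + 1
                else:
                    out.append(value[i])             # unresolvable: leave as-is
                    i += 1
            else:
                out.append(value[i])
                i += 1
        return "".join(out)

    for name, raw in env_list:
        resolved[name] = expand(raw)
    return resolved

env = expand_env([
    ("FOO", "foo-value"),
    ("BAR", "$(FOO);;$(FOO)"),   # composed from an earlier var
    ("ESCAPED", "$$(FOO)"),      # escaped: stays literal
])
print(env["BAR"], env["ESCAPED"])
```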
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:42:54.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-dca09a1d-d432-4df5-bc62-b6bb80c92718
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-dca09a1d-d432-4df5-bc62-b6bb80c92718
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:43:01.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9012" for this suite.

• [SLOW TEST:6.538 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:43:01.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 in namespace container-probe-3393
May  6 18:43:05.504: INFO: Started pod liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 in namespace container-probe-3393
STEP: checking the pod's current state and verifying that restartCount is present
May  6 18:43:05.507: INFO: Initial restart count of pod liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is 0
May  6 18:43:27.575: INFO: Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 1 (22.067891461s elapsed)
May  6 18:43:45.633: INFO: Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 2 (40.125982023s elapsed)
May  6 18:44:08.206: INFO: Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 3 (1m2.698252213s elapsed)
May  6 18:44:28.247: INFO: Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 4 (1m22.740060837s elapsed)
May  6 18:45:31.427: INFO: Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 5 (2m25.919673458s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:45:31.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3393" for this suite.

• [SLOW TEST:150.504 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3266,"failed":0}
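The monotonicity property above can be re-checked straight from the log: each "is now N" line must report a strictly larger restart count than the one before. A sketch that parses the lines from this run (copied verbatim) the way the test's assertion conceptually works:

```python
import re

log = """\
Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 1 (22.067891461s elapsed)
Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 2 (40.125982023s elapsed)
Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 3 (1m2.698252213s elapsed)
Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 4 (1m22.740060837s elapsed)
Restart count of pod container-probe-3393/liveness-42cc2331-aa04-467e-b1b8-0fccfcb000e8 is now 5 (2m25.919673458s elapsed)
"""

# Pull out every reported restart count, in order of appearance:
counts = [int(m.group(1)) for m in re.finditer(r"is now (\d+)", log)]
monotonic = all(b > a for a, b in zip(counts, counts[1:]))
print(counts, monotonic)
```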
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:45:31.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
May  6 18:45:32.204: INFO: namespace kubectl-8645
May  6 18:45:32.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8645'
May  6 18:45:33.000: INFO: stderr: ""
May  6 18:45:33.000: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  6 18:45:34.004: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:34.004: INFO: Found 0 / 1
May  6 18:45:35.004: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:35.004: INFO: Found 0 / 1
May  6 18:45:36.161: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:36.161: INFO: Found 0 / 1
May  6 18:45:37.054: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:37.054: INFO: Found 0 / 1
May  6 18:45:38.383: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:38.383: INFO: Found 0 / 1
May  6 18:45:39.348: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:39.348: INFO: Found 1 / 1
May  6 18:45:39.348: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May  6 18:45:39.378: INFO: Selector matched 1 pods for map[app:agnhost]
May  6 18:45:39.378: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  6 18:45:39.378: INFO: wait on agnhost-master startup in kubectl-8645 
May  6 18:45:39.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-mdn7k agnhost-master --namespace=kubectl-8645'
May  6 18:45:39.494: INFO: stderr: ""
May  6 18:45:39.494: INFO: stdout: "Paused\n"
STEP: exposing RC
May  6 18:45:39.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8645'
May  6 18:45:40.278: INFO: stderr: ""
May  6 18:45:40.278: INFO: stdout: "service/rm2 exposed\n"
May  6 18:45:40.639: INFO: Service rm2 in namespace kubectl-8645 found.
STEP: exposing service
May  6 18:45:42.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8645'
May  6 18:45:42.815: INFO: stderr: ""
May  6 18:45:42.816: INFO: stdout: "service/rm3 exposed\n"
May  6 18:45:42.915: INFO: Service rm3 in namespace kubectl-8645 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:45:44.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8645" for this suite.

• [SLOW TEST:13.049 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":184,"skipped":3287,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:45:44.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:46:01.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7942" for this suite.
STEP: Destroying namespace "nsdeletetest-3935" for this suite.
May  6 18:46:01.423: INFO: Namespace nsdeletetest-3935 was already deleted
STEP: Destroying namespace "nsdeletetest-4588" for this suite.

• [SLOW TEST:16.501 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":185,"skipped":3304,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:46:01.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:46:19.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8260" for this suite.

• [SLOW TEST:17.833 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":186,"skipped":3313,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:46:19.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-5f6q
STEP: Creating a pod to test atomic-volume-subpath
May  6 18:46:19.340: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5f6q" in namespace "subpath-9004" to be "Succeeded or Failed"
May  6 18:46:19.343: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.52142ms
May  6 18:46:21.449: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108796847s
May  6 18:46:23.453: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 4.112816946s
May  6 18:46:25.457: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 6.117452895s
May  6 18:46:27.461: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 8.121363562s
May  6 18:46:29.466: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 10.126220707s
May  6 18:46:31.470: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 12.130358395s
May  6 18:46:33.475: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 14.135024369s
May  6 18:46:35.479: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.138860979s
May  6 18:46:37.482: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 18.142246827s
May  6 18:46:39.485: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 20.145405401s
May  6 18:46:41.514: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Running", Reason="", readiness=true. Elapsed: 22.174508728s
May  6 18:46:43.519: INFO: Pod "pod-subpath-test-configmap-5f6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.179535089s
STEP: Saw pod success
May  6 18:46:43.519: INFO: Pod "pod-subpath-test-configmap-5f6q" satisfied condition "Succeeded or Failed"
May  6 18:46:43.522: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-5f6q container test-container-subpath-configmap-5f6q: 
STEP: delete the pod
May  6 18:46:43.645: INFO: Waiting for pod pod-subpath-test-configmap-5f6q to disappear
May  6 18:46:43.686: INFO: Pod pod-subpath-test-configmap-5f6q no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5f6q
May  6 18:46:43.686: INFO: Deleting pod "pod-subpath-test-configmap-5f6q" in namespace "subpath-9004"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:46:43.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9004" for this suite.

• [SLOW TEST:24.438 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":187,"skipped":3314,"failed":0}
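Editor's note: the atomic-writer subPath scenario above (a ConfigMap key mounted over a path that already exists in the image) can be reproduced with a manifest roughly like the following sketch. The pod name echoes the log; the ConfigMap name, image tag, and file path are illustrative assumptions, not values taken from the test.

```yaml
# Hypothetical sketch: mount a single ConfigMap key over an existing file via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  volumes:
    - name: config-volume
      configMap:
        name: my-configmap            # hypothetical ConfigMap holding the file content
  containers:
    - name: test-container-subpath
      image: busybox:1.31             # image is an assumption
      command: ["cat", "/etc/resolv.conf"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/resolv.conf # path of a file that already exists in the image
          subPath: resolv.conf        # single key projected from the ConfigMap
```

Because `subPath` mounts one key rather than the whole volume directory, the mount can shadow a single existing file without replacing its parent directory.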
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:46:43.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May  6 18:46:43.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4820 /api/v1/namespaces/watch-4820/configmaps/e2e-watch-test-resource-version be0aa68b-18a1-41da-b0bc-893b40342c04 2070027 0 2020-05-06 18:46:43 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-06 18:46:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:46:43.887: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4820 /api/v1/namespaces/watch-4820/configmaps/e2e-watch-test-resource-version be0aa68b-18a1-41da-b0bc-893b40342c04 2070028 0 2020-05-06 18:46:43 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-06 18:46:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:46:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4820" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":188,"skipped":3322,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:46:43.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-0a9dfe63-aed9-472e-96e3-d84737548a85
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:46:43.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3594" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":189,"skipped":3324,"failed":0}
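Editor's note: the Secrets test above expects the API server to reject a Secret whose `data` map contains an empty key. A minimal manifest that should fail validation looks roughly like this sketch (name and value are illustrative):

```yaml
# Hypothetical sketch: the apiserver should reject this Secret
# because a data key must be a non-empty, valid key name.
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
type: Opaque
data:
  "": c29tZS12YWx1ZQ==   # empty key -> validation error on create
```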

------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:46:43.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 18:46:50.288: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.291: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.297: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.419: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.423: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.425: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.428: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:50.435: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:46:55.440: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.443: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.447: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.451: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.460: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.463: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.466: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.469: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:46:55.476: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:47:00.439: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.443: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.447: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.479: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.486: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.489: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.491: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.494: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:00.500: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:47:05.440: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.444: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.448: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.456: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.465: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.467: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.469: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.472: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:05.477: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:47:10.439: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.443: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.447: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.450: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.460: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.463: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.465: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.468: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:10.474: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:47:15.440: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.444: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.448: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.450: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.459: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.462: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.464: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.467: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94: the server could not find the requested resource (get pods dns-test-66a220bf-df37-4341-a1e1-032c876b3a94)
May  6 18:47:15.472: INFO: Lookups using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local jessie_udp@dns-test-service-2.dns-3986.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3986.svc.cluster.local]

May  6 18:47:20.503: INFO: DNS probes using dns-3986/dns-test-66a220bf-df37-4341-a1e1-032c876b3a94 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:47:20.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3986" for this suite.

• [SLOW TEST:37.148 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":190,"skipped":3324,"failed":0}
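Editor's note: the subdomain DNS names probed above (e.g. `dns-querier-2.dns-test-service-2.dns-3986.svc.cluster.local`) come from pairing a headless Service with pods whose `spec.subdomain` matches the Service name. A sketch of that setup, reusing the names seen in the log (image and selector label are assumptions):

```yaml
# Hypothetical sketch: headless Service + matching pod subdomain gives each pod
# the DNS name <hostname>.<subdomain>.<namespace>.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2      # must equal the pods' spec.subdomain
spec:
  clusterIP: None               # headless
  selector:
    app: dns-querier
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    app: dns-querier
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2 # resolves as dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local
  containers:
    - name: querier
      image: busybox:1.31       # image is an assumption
      command: ["sleep", "3600"]
```

The early "Unable to read" lines in the log are the prober retrying until these records propagate; the run succeeds once kube-dns/CoreDNS serves them.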
SSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:47:21.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:47:39.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-551" for this suite.

• [SLOW TEST:18.192 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":191,"skipped":3328,"failed":0}
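Editor's note: "tasks sometimes fail and are locally restarted" relies on `restartPolicy: OnFailure`, where the kubelet restarts the failed container in place (visible above as `restart count 1` on the `fail-once-local-*` pods) instead of the Job creating replacement pods. A sketch under stated assumptions (the fail-once logic and image are illustrative, not the test's actual command):

```yaml
# Hypothetical sketch: each task fails once, then succeeds after an in-place restart,
# so the Job still reaches its completion count.
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local          # name mirrors the pods seen in the log
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # kubelet restarts the container locally
      containers:
        - name: c
          image: busybox:1.31    # image is an assumption
          # First attempt: leave a marker and fail. Restarted attempt: marker
          # exists (emptyDir survives container restarts), so exit 0.
          command: ["sh", "-c", "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          emptyDir: {}
```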
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:47:39.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  6 18:47:39.355: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  6 18:47:39.422: INFO: Waiting for terminating namespaces to be deleted...
May  6 18:47:39.424: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
May  6 18:47:39.477: INFO: fail-once-local-tp8qn from job-551 started at 2020-05-06 18:47:29 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.477: INFO: 	Container c ready: false, restart count 1
May  6 18:47:39.477: INFO: fail-once-local-dqw6n from job-551 started at 2020-05-06 18:47:21 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.477: INFO: 	Container c ready: false, restart count 1
May  6 18:47:39.477: INFO: fail-once-local-dktgt from job-551 started at 2020-05-06 18:47:21 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.477: INFO: 	Container c ready: false, restart count 1
May  6 18:47:39.477: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.477: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:47:39.477: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.477: INFO: 	Container kindnet-cni ready: true, restart count 1
May  6 18:47:39.477: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  6 18:47:39.494: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.494: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:47:39.494: INFO: fail-once-local-n97fw from job-551 started at 2020-05-06 18:47:28 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.494: INFO: 	Container c ready: false, restart count 1
May  6 18:47:39.494: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:47:39.494: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-435d2c6e-be70-467f-92b1-0501f79b911e 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-435d2c6e-be70-467f-92b1-0501f79b911e off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-435d2c6e-be70-467f-92b1-0501f79b911e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:48:02.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5828" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:23.034 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":192,"skipped":3329,"failed":0}
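The scheduling rule the test above exercises is that two hostPort requests only conflict when hostIP, hostPort, and protocol all collide; differing on any one of the three (as pod2 and pod3 do) leaves the pods schedulable on the same node. A minimal sketch of that tuple comparison (hypothetical helper, not the scheduler's actual code):

```python
def host_ports_conflict(a, b):
    """a, b: (hostIP, hostPort, protocol) tuples requested by two pods."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False  # different port or protocol: never a conflict
    # 0.0.0.0 binds every address, so it collides with any hostIP.
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")  # same port, different hostIP -> schedulable
pod3 = ("127.0.0.2", 54321, "UDP")  # same hostIP/port, different protocol -> schedulable

assert not host_ports_conflict(pod1, pod2)
assert not host_ports_conflict(pod2, pod3)
assert host_ports_conflict(pod1, ("0.0.0.0", 54321, "TCP"))
```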
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:48:02.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:48:03.183: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
May  6 18:48:05.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:48:07.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387683, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:48:10.320: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:48:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5925" for this suite.
STEP: Destroying namespace "webhook-5925-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.605 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":193,"skipped":3378,"failed":0}
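What "fail closed" means in the webhook test above: the registered webhook points at a server the API server cannot reach, and `failurePolicy: Fail` turns that call error into a rejection of the request. A hedged sketch of that decision (a toy function, not the apiserver's implementation):

```python
def admit(webhook_reachable, failure_policy):
    # With failurePolicy "Fail", an unreachable webhook rejects the request
    # (fail closed); with "Ignore", the request is allowed through.
    if webhook_reachable:
        return "webhook decides"
    return "reject" if failure_policy == "Fail" else "allow"

assert admit(False, "Fail") == "reject"      # the behavior verified above
assert admit(False, "Ignore") == "allow"
```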
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:48:10.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:48:11.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca" in namespace "projected-7767" to be "Succeeded or Failed"
May  6 18:48:11.289: INFO: Pod "downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca": Phase="Pending", Reason="", readiness=false. Elapsed: 213.679066ms
May  6 18:48:13.293: INFO: Pod "downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217793105s
May  6 18:48:15.350: INFO: Pod "downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.274394001s
STEP: Saw pod success
May  6 18:48:15.350: INFO: Pod "downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca" satisfied condition "Succeeded or Failed"
May  6 18:48:15.366: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca container client-container: 
STEP: delete the pod
May  6 18:48:15.433: INFO: Waiting for pod downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca to disappear
May  6 18:48:15.437: INFO: Pod downwardapi-volume-3fed2ac9-dbfb-4ef4-bb4f-2e3e7bb37cca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:48:15.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7767" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":194,"skipped":3381,"failed":0}
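The DefaultMode the test above checks is an octal file permission applied to every projected file. A local sketch of what mode 0644 means, using an ordinary temp file rather than a downward API volume:

```python
import os
import stat
import tempfile

# Mode 0644 = rw-r--r--: owner read/write, group and other read-only.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o644
os.remove(path)
```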
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:48:15.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-6a4a5ebf-7ea4-45e6-9d1d-89d04f3faacb in namespace container-probe-8471
May  6 18:48:19.677: INFO: Started pod busybox-6a4a5ebf-7ea4-45e6-9d1d-89d04f3faacb in namespace container-probe-8471
STEP: checking the pod's current state and verifying that restartCount is present
May  6 18:48:19.680: INFO: Initial restart count of pod busybox-6a4a5ebf-7ea4-45e6-9d1d-89d04f3faacb is 0
May  6 18:49:14.775: INFO: Restart count of pod container-probe-8471/busybox-6a4a5ebf-7ea4-45e6-9d1d-89d04f3faacb is now 1 (55.094388468s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:49:14.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8471" for this suite.

• [SLOW TEST:59.446 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3384,"failed":0}
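In the probe test above, `cat /tmp/health` succeeds while the file exists and fails after the test container deletes it; once the probe fails `failureThreshold` consecutive times, the kubelet restarts the container and restartCount goes from 0 to 1. A simplified sketch of that counting logic (hypothetical, not the kubelet's code):

```python
def probe_loop(results, failure_threshold=3):
    """Count restarts given a sequence of probe outcomes (True = success)."""
    restarts, consecutive_failures = 0, 0
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= failure_threshold:
            restarts += 1
            consecutive_failures = 0  # restart resets the failure streak
    return restarts

# Healthy at first, then /tmp/health is removed and the probe keeps failing:
assert probe_loop([True, True, False, False, False]) == 1
```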
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:49:14.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-40356130-c541-48e6-9b69-87d311bf5d54
STEP: Creating a pod to test consume secrets
May  6 18:49:15.100: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110" in namespace "projected-6226" to be "Succeeded or Failed"
May  6 18:49:15.116: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110": Phase="Pending", Reason="", readiness=false. Elapsed: 15.705131ms
May  6 18:49:17.313: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212506166s
May  6 18:49:19.445: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344269347s
May  6 18:49:21.463: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110": Phase="Running", Reason="", readiness=true. Elapsed: 6.362379218s
May  6 18:49:23.602: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.501312054s
STEP: Saw pod success
May  6 18:49:23.602: INFO: Pod "pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110" satisfied condition "Succeeded or Failed"
May  6 18:49:23.605: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110 container projected-secret-volume-test: 
STEP: delete the pod
May  6 18:49:24.380: INFO: Waiting for pod pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110 to disappear
May  6 18:49:24.403: INFO: Pod pod-projected-secrets-04e01ba8-48b8-4c4a-9b6b-81d7dc6ed110 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:49:24.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6226" for this suite.

• [SLOW TEST:9.521 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3390,"failed":0}
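The "with mappings" variant above uses the projection's `items` list to remap secret keys onto custom file paths inside the volume. A minimal sketch of that key-to-path projection (toy function; paths and keys are illustrative):

```python
def project(secret_data, items):
    """Map secret keys to the file paths requested by the projection items."""
    return {item["path"]: secret_data[item["key"]] for item in items}

# Only the mapped key is projected, under its new path:
files = project({"data-1": "value-1"}, [{"key": "data-1", "path": "new-path-data-1"}])
assert files == {"new-path-data-1": "value-1"}
```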
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:49:24.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May  6 18:49:25.431: INFO: Waiting up to 5m0s for pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96" in namespace "emptydir-7202" to be "Succeeded or Failed"
May  6 18:49:25.482: INFO: Pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96": Phase="Pending", Reason="", readiness=false. Elapsed: 50.86023ms
May  6 18:49:27.486: INFO: Pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054765993s
May  6 18:49:29.490: INFO: Pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058266067s
May  6 18:49:31.494: INFO: Pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062517119s
STEP: Saw pod success
May  6 18:49:31.494: INFO: Pod "pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96" satisfied condition "Succeeded or Failed"
May  6 18:49:31.498: INFO: Trying to get logs from node kali-worker2 pod pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96 container test-container: 
STEP: delete the pod
May  6 18:49:31.530: INFO: Waiting for pod pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96 to disappear
May  6 18:49:31.541: INFO: Pod pod-8b64dc6b-9747-4d82-a8d9-0fb90d278f96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:49:31.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7202" for this suite.

• [SLOW TEST:7.138 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3401,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:49:31.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2
May  6 18:49:31.641: INFO: Pod name my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2: Found 0 pods out of 1
May  6 18:49:36.651: INFO: Pod name my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2: Found 1 pods out of 1
May  6 18:49:36.651: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2" are running
May  6 18:49:36.653: INFO: Pod "my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2-zmn7p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 18:49:31 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 18:49:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 18:49:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 18:49:31 +0000 UTC Reason: Message:}])
May  6 18:49:36.653: INFO: Trying to dial the pod
May  6 18:49:41.666: INFO: Controller my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2: Got expected result from replica 1 [my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2-zmn7p]: "my-hostname-basic-d876e546-a501-4245-b1ef-c35a03f231d2-zmn7p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:49:41.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7929" for this suite.

• [SLOW TEST:10.124 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":198,"skipped":3413,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:49:41.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:49:50.052: INFO: Waiting up to 5m0s for pod "client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d" in namespace "pods-3478" to be "Succeeded or Failed"
May  6 18:49:50.122: INFO: Pod "client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d": Phase="Pending", Reason="", readiness=false. Elapsed: 70.211424ms
May  6 18:49:52.127: INFO: Pod "client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074897997s
May  6 18:49:54.132: INFO: Pod "client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079406181s
STEP: Saw pod success
May  6 18:49:54.132: INFO: Pod "client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d" satisfied condition "Succeeded or Failed"
May  6 18:49:54.135: INFO: Trying to get logs from node kali-worker2 pod client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d container env3cont: 
STEP: delete the pod
May  6 18:49:54.169: INFO: Waiting for pod client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d to disappear
May  6 18:49:54.178: INFO: Pod client-envvars-e1c518ce-8e79-48cd-871f-8a026df4615d no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:49:54.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3478" for this suite.

• [SLOW TEST:12.514 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3415,"failed":0}
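The test above verifies the documented convention that each service existing when a pod starts is exposed to it as environment variables: the service name is upper-cased with dashes converted to underscores, yielding `{SVCNAME}_SERVICE_HOST`, `{SVCNAME}_SERVICE_PORT`, and docker-link-style `{SVCNAME}_PORT` variables. A sketch of that naming (hypothetical helper, simplified to the three basic variables):

```python
def service_env_vars(name, host, port, protocol="TCP"):
    """Generate the basic env var names Kubernetes injects for a service."""
    n = name.upper().replace("-", "_")
    return {
        f"{n}_SERVICE_HOST": host,
        f"{n}_SERVICE_PORT": str(port),
        f"{n}_PORT": f"{protocol.lower()}://{host}:{port}",
    }

env = service_env_vars("fooservice", "10.0.0.10", 8765)
assert env["FOOSERVICE_SERVICE_HOST"] == "10.0.0.10"
assert env["FOOSERVICE_SERVICE_PORT"] == "8765"
```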
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:49:54.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:49:54.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May  6 18:49:57.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 create -f -'
May  6 18:50:01.750: INFO: stderr: ""
May  6 18:50:01.750: INFO: stdout: "e2e-test-crd-publish-openapi-2704-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  6 18:50:01.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 delete e2e-test-crd-publish-openapi-2704-crds test-foo'
May  6 18:50:01.858: INFO: stderr: ""
May  6 18:50:01.858: INFO: stdout: "e2e-test-crd-publish-openapi-2704-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May  6 18:50:01.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 apply -f -'
May  6 18:50:02.179: INFO: stderr: ""
May  6 18:50:02.179: INFO: stdout: "e2e-test-crd-publish-openapi-2704-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  6 18:50:02.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 delete e2e-test-crd-publish-openapi-2704-crds test-foo'
May  6 18:50:02.308: INFO: stderr: ""
May  6 18:50:02.308: INFO: stdout: "e2e-test-crd-publish-openapi-2704-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May  6 18:50:02.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 create -f -'
May  6 18:50:02.554: INFO: rc: 1
May  6 18:50:02.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 apply -f -'
May  6 18:50:02.824: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May  6 18:50:02.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 create -f -'
May  6 18:50:03.080: INFO: rc: 1
May  6 18:50:03.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5663 apply -f -'
May  6 18:50:03.319: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May  6 18:50:03.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2704-crds'
May  6 18:50:03.584: INFO: stderr: ""
May  6 18:50:03.584: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2704-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May  6 18:50:03.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2704-crds.metadata'
May  6 18:50:03.812: INFO: stderr: ""
May  6 18:50:03.812: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2704-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May  6 18:50:03.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2704-crds.spec'
May  6 18:50:04.066: INFO: stderr: ""
May  6 18:50:04.066: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2704-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May  6 18:50:04.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2704-crds.spec.bars'
May  6 18:50:04.302: INFO: stderr: ""
May  6 18:50:04.302: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2704-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May  6 18:50:04.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2704-crds.spec.bars2'
May  6 18:50:04.528: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:07.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5663" for this suite.

• [SLOW TEST:13.416 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":200,"skipped":3417,"failed":0}
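Note: the `kubectl explain` calls in the test above can be reproduced by hand against any cluster whose apiserver publishes the CRD's validation schema in OpenAPI. A sketch, assuming the default kubeconfig and that the generated CRD name from this run (`e2e-test-crd-publish-openapi-2704-crds`) still exists:

```shell
# Walk the published schema of a custom resource field by field.
kubectl explain e2e-test-crd-publish-openapi-2704-crds.spec
kubectl explain e2e-test-crd-publish-openapi-2704-crds.spec.bars

# A property absent from the schema makes kubectl exit non-zero
# (this is the "rc: 1" recorded for spec.bars2 above).
kubectl explain e2e-test-crd-publish-openapi-2704-crds.spec.bars2 || echo "no such field"
```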
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:07.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May  6 18:50:08.534: INFO: >>> kubeConfig: /root/.kube/config
May  6 18:50:11.504: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:22.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9022" for this suite.

• [SLOW TEST:14.594 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":201,"skipped":3477,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:22.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  6 18:50:22.576: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:31.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5002" for this suite.

• [SLOW TEST:9.634 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":202,"skipped":3486,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:31.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  6 18:50:32.340: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:43.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1626" for this suite.

• [SLOW TEST:11.652 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":203,"skipped":3499,"failed":0}
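Note: the two InitContainer cases above exercise the ordering guarantee: with `restartPolicy: Never`, init containers run once, in order, and a failed init container marks the whole pod Failed without ever starting the app containers. A minimal manifest sketching the failing case (the pod name and busybox image are illustrative, not what the e2e framework actually deploys):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo          # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.36         # assumed image
    command: ["sh", "-c", "exit 1"]   # init container fails; pod goes Failed
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "echo never runs"]  # app container is never started
EOF
```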
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:43.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-e3f04877-8dd4-4d1a-82e1-35808b4a1db2
STEP: Creating a pod to test consume configMaps
May  6 18:50:44.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77" in namespace "configmap-5937" to be "Succeeded or Failed"
May  6 18:50:44.469: INFO: Pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77": Phase="Pending", Reason="", readiness=false. Elapsed: 102.093367ms
May  6 18:50:46.517: INFO: Pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150140042s
May  6 18:50:48.535: INFO: Pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168404271s
May  6 18:50:50.539: INFO: Pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172046748s
STEP: Saw pod success
May  6 18:50:50.539: INFO: Pod "pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77" satisfied condition "Succeeded or Failed"
May  6 18:50:50.541: INFO: Trying to get logs from node kali-worker pod pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77 container configmap-volume-test: 
STEP: delete the pod
May  6 18:50:50.570: INFO: Waiting for pod pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77 to disappear
May  6 18:50:50.574: INFO: Pod pod-configmaps-dd65e09e-3dc1-419b-92d8-4c93e92e9f77 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5937" for this suite.

• [SLOW TEST:7.094 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3503,"failed":0}
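Note: "with mappings as non-root" in the test title means the ConfigMap is mounted with `items` remapping keys to file paths, and the consuming container runs under a non-root UID. A sketch of an equivalent manifest (names, UID, and image are illustrative; the e2e test generates its own ConfigMap and consumer pod):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000             # non-root, as the test title requires
  restartPolicy: Never
  containers:
  - name: consumer
    image: busybox:1.36         # assumed image
    command: ["sh", "-c", "cat /etc/config/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config           # hypothetical ConfigMap; must exist
      items:                    # the "mappings": remap key -> file path
      - key: data-1
        path: path/to/data
EOF
```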
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:50.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:50:51.942: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:50:53.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387852, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:50:55.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387852, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387851, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:50:59.152: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May  6 18:50:59.175: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:59.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9758" for this suite.
STEP: Destroying namespace "webhook-9758-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.660 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":205,"skipped":3530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:59.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:50:59.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4591" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":206,"skipped":3552,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:50:59.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8746
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 18:50:59.391: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 18:50:59.463: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:51:01.518: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:51:03.467: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:51:05.468: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:07.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:09.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:11.468: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:13.468: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:15.487: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:17.466: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:51:19.467: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 18:51:19.471: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 18:51:23.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.146:8080/dial?request=hostname&protocol=udp&host=10.244.2.129&port=8081&tries=1'] Namespace:pod-network-test-8746 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:51:23.525: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:51:23.563602       7 log.go:172] (0xc002950000) (0xc001a221e0) Create stream
I0506 18:51:23.563643       7 log.go:172] (0xc002950000) (0xc001a221e0) Stream added, broadcasting: 1
I0506 18:51:23.565791       7 log.go:172] (0xc002950000) Reply frame received for 1
I0506 18:51:23.565833       7 log.go:172] (0xc002950000) (0xc001a22280) Create stream
I0506 18:51:23.565853       7 log.go:172] (0xc002950000) (0xc001a22280) Stream added, broadcasting: 3
I0506 18:51:23.566842       7 log.go:172] (0xc002950000) Reply frame received for 3
I0506 18:51:23.566876       7 log.go:172] (0xc002950000) (0xc001b58aa0) Create stream
I0506 18:51:23.566889       7 log.go:172] (0xc002950000) (0xc001b58aa0) Stream added, broadcasting: 5
I0506 18:51:23.567894       7 log.go:172] (0xc002950000) Reply frame received for 5
I0506 18:51:23.634162       7 log.go:172] (0xc002950000) Data frame received for 3
I0506 18:51:23.634199       7 log.go:172] (0xc001a22280) (3) Data frame handling
I0506 18:51:23.634222       7 log.go:172] (0xc001a22280) (3) Data frame sent
I0506 18:51:23.634597       7 log.go:172] (0xc002950000) Data frame received for 3
I0506 18:51:23.634619       7 log.go:172] (0xc001a22280) (3) Data frame handling
I0506 18:51:23.634896       7 log.go:172] (0xc002950000) Data frame received for 5
I0506 18:51:23.634912       7 log.go:172] (0xc001b58aa0) (5) Data frame handling
I0506 18:51:23.636548       7 log.go:172] (0xc002950000) Data frame received for 1
I0506 18:51:23.636579       7 log.go:172] (0xc001a221e0) (1) Data frame handling
I0506 18:51:23.636602       7 log.go:172] (0xc001a221e0) (1) Data frame sent
I0506 18:51:23.636623       7 log.go:172] (0xc002950000) (0xc001a221e0) Stream removed, broadcasting: 1
I0506 18:51:23.636648       7 log.go:172] (0xc002950000) Go away received
I0506 18:51:23.636760       7 log.go:172] (0xc002950000) (0xc001a221e0) Stream removed, broadcasting: 1
I0506 18:51:23.636778       7 log.go:172] (0xc002950000) (0xc001a22280) Stream removed, broadcasting: 3
I0506 18:51:23.636785       7 log.go:172] (0xc002950000) (0xc001b58aa0) Stream removed, broadcasting: 5
May  6 18:51:23.636: INFO: Waiting for responses: map[]
May  6 18:51:23.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.146:8080/dial?request=hostname&protocol=udp&host=10.244.1.145&port=8081&tries=1'] Namespace:pod-network-test-8746 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:51:23.639: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:51:23.664234       7 log.go:172] (0xc005c32420) (0xc001a22640) Create stream
I0506 18:51:23.664266       7 log.go:172] (0xc005c32420) (0xc001a22640) Stream added, broadcasting: 1
I0506 18:51:23.666553       7 log.go:172] (0xc005c32420) Reply frame received for 1
I0506 18:51:23.666595       7 log.go:172] (0xc005c32420) (0xc001216000) Create stream
I0506 18:51:23.666610       7 log.go:172] (0xc005c32420) (0xc001216000) Stream added, broadcasting: 3
I0506 18:51:23.667524       7 log.go:172] (0xc005c32420) Reply frame received for 3
I0506 18:51:23.667562       7 log.go:172] (0xc005c32420) (0xc001b0c3c0) Create stream
I0506 18:51:23.667576       7 log.go:172] (0xc005c32420) (0xc001b0c3c0) Stream added, broadcasting: 5
I0506 18:51:23.668432       7 log.go:172] (0xc005c32420) Reply frame received for 5
I0506 18:51:23.729660       7 log.go:172] (0xc005c32420) Data frame received for 3
I0506 18:51:23.729705       7 log.go:172] (0xc001216000) (3) Data frame handling
I0506 18:51:23.729734       7 log.go:172] (0xc001216000) (3) Data frame sent
I0506 18:51:23.729959       7 log.go:172] (0xc005c32420) Data frame received for 5
I0506 18:51:23.729993       7 log.go:172] (0xc001b0c3c0) (5) Data frame handling
I0506 18:51:23.730032       7 log.go:172] (0xc005c32420) Data frame received for 3
I0506 18:51:23.730089       7 log.go:172] (0xc001216000) (3) Data frame handling
I0506 18:51:23.731366       7 log.go:172] (0xc005c32420) Data frame received for 1
I0506 18:51:23.731403       7 log.go:172] (0xc001a22640) (1) Data frame handling
I0506 18:51:23.731422       7 log.go:172] (0xc001a22640) (1) Data frame sent
I0506 18:51:23.731430       7 log.go:172] (0xc005c32420) (0xc001a22640) Stream removed, broadcasting: 1
I0506 18:51:23.731522       7 log.go:172] (0xc005c32420) (0xc001a22640) Stream removed, broadcasting: 1
I0506 18:51:23.731532       7 log.go:172] (0xc005c32420) (0xc001216000) Stream removed, broadcasting: 3
I0506 18:51:23.731619       7 log.go:172] (0xc005c32420) (0xc001b0c3c0) Stream removed, broadcasting: 5
May  6 18:51:23.731: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:23.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8746" for this suite.

• [SLOW TEST:24.391 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3562,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:23.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-4e526bc6-9d57-44e9-a298-0d50422a1333
STEP: Creating configMap with name cm-test-opt-upd-2055e6f1-e18c-4a18-935d-0b9c3d66b96a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4e526bc6-9d57-44e9-a298-0d50422a1333
STEP: Updating configmap cm-test-opt-upd-2055e6f1-e18c-4a18-935d-0b9c3d66b96a
STEP: Creating configMap with name cm-test-opt-create-c5c606f5-c976-4ce1-9753-7cebc637d128
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:36.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1005" for this suite.

• [SLOW TEST:12.741 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3577,"failed":0}
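The test above exercises optional sources in a projected configMap volume: one ConfigMap is deleted, one is updated, and one is created while the pod runs. A minimal sketch of the kind of pod spec involved, assuming illustrative names and image (not taken from the log):

```yaml
# Hypothetical sketch: a pod projecting two optional ConfigMaps into one volume.
# Because the sources are optional, deleting one does not break the pod; updates
# and late-created ConfigMaps are eventually reflected in the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: demo
    image: busybox:1.29            # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del    # deleted during the test
          optional: true
      - configMap:
          name: cm-test-opt-upd    # updated during the test
          optional: true
```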
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:36.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:36.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7384" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":209,"skipped":3610,"failed":0}
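The ResourceQuota test walks the object through create, get, update, and delete. A minimal sketch of a quota manifest of the kind being exercised, with an assumed name and values:

```yaml
# Hypothetical sketch of a ResourceQuota like the one created, updated,
# and deleted by the test above. The update step would modify spec.hard.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    pods: "5"   # a hard cap on pod count in the namespace
```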
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:36.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2938" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:37.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:51:37.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7" in namespace "projected-1424" to be "Succeeded or Failed"
May  6 18:51:37.809: INFO: Pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.320396ms
May  6 18:51:40.303: INFO: Pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507474201s
May  6 18:51:42.500: INFO: Pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.704584379s
May  6 18:51:44.503: INFO: Pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.708063495s
STEP: Saw pod success
May  6 18:51:44.503: INFO: Pod "downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7" satisfied condition "Succeeded or Failed"
May  6 18:51:44.506: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7 container client-container: 
STEP: delete the pod
May  6 18:51:44.578: INFO: Waiting for pod downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7 to disappear
May  6 18:51:44.643: INFO: Pod downwardapi-volume-c9c5571e-4602-4c39-b36e-f6a358caa2f7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:44.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1424" for this suite.

• [SLOW TEST:7.516 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3642,"failed":0}
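This test verifies that when a container sets no CPU limit, the downward API reports the node's allocatable CPU as the default. A sketch of the projected downward API volume involved, with illustrative names and image:

```yaml
# Hypothetical sketch: a projected downward API volume exposing the container's
# cpu limit as a file. With no limit set on the container, the value written to
# /etc/podinfo/cpu_limit falls back to node allocatable cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  containers:
  - name: client-container
    image: busybox:1.29            # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```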
SSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:44.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:51:44.872: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566" in namespace "security-context-test-5941" to be "Succeeded or Failed"
May  6 18:51:44.918: INFO: Pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566": Phase="Pending", Reason="", readiness=false. Elapsed: 46.051434ms
May  6 18:51:47.260: INFO: Pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38818846s
May  6 18:51:49.488: INFO: Pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566": Phase="Pending", Reason="", readiness=false. Elapsed: 4.615246328s
May  6 18:51:51.491: INFO: Pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.618781703s
May  6 18:51:51.491: INFO: Pod "alpine-nnp-false-7540c3bf-6a0b-4a39-b132-7107a67a4566" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:51.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5941" for this suite.

• [SLOW TEST:6.942 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3645,"failed":0}
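The alpine-nnp-false pod above runs with privilege escalation disallowed. A sketch of the relevant securityContext, with assumed name, image, and command:

```yaml
# Hypothetical sketch: a securityContext forbidding privilege escalation,
# as exercised by the alpine-nnp-false pod in the test above.
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-demo
spec:
  containers:
  - name: alpine
    image: alpine:3.11             # illustrative image/tag
    command: ["sh", "-c", "id -u"]
    securityContext:
      allowPrivilegeEscalation: false   # sets no_new_privs on the container process
```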
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:51.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:51:59.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3935" for this suite.

• [SLOW TEST:8.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3647,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:51:59.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0506 18:52:01.175683       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  6 18:52:01.175: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:01.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1640" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":214,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:01.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7884
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7884
STEP: Creating statefulset with conflicting port in namespace statefulset-7884
STEP: Waiting until pod test-pod starts running in namespace statefulset-7884
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7884
May  6 18:52:10.719: INFO: Observed stateful pod in namespace: statefulset-7884, name: ss-0, uid: 10f58f9e-0d50-4ec7-9cae-96ea666738b1, status phase: Pending. Waiting for statefulset controller to delete.
May  6 18:52:10.746: INFO: Observed stateful pod in namespace: statefulset-7884, name: ss-0, uid: 10f58f9e-0d50-4ec7-9cae-96ea666738b1, status phase: Failed. Waiting for statefulset controller to delete.
May  6 18:52:11.770: INFO: Observed stateful pod in namespace: statefulset-7884, name: ss-0, uid: 10f58f9e-0d50-4ec7-9cae-96ea666738b1, status phase: Failed. Waiting for statefulset controller to delete.
May  6 18:52:12.228: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7884
STEP: Removing pod with conflicting port in namespace statefulset-7884
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7884 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  6 18:52:19.495: INFO: Deleting all statefulset in ns statefulset-7884
May  6 18:52:19.499: INFO: Scaling statefulset ss to 0
May  6 18:52:29.919: INFO: Waiting for statefulset status.replicas updated to 0
May  6 18:52:29.921: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:29.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7884" for this suite.

• [SLOW TEST:28.758 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":215,"skipped":3697,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:29.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  6 18:52:34.583: INFO: Successfully updated pod "labelsupdate1b16db00-8401-4b82-8589-2be11c3d1d29"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:38.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-128" for this suite.

• [SLOW TEST:8.758 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3699,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:38.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
May  6 18:52:39.925: INFO: mount-test service account has no secret references
STEP: getting the auto-created API token
STEP: reading a file in the container
May  6 18:52:45.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6085 pod-service-account-2a54875c-a28e-4899-a174-66223c7b5ab5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May  6 18:52:45.565: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6085 pod-service-account-2a54875c-a28e-4899-a174-66223c7b5ab5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May  6 18:52:45.771: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6085 pod-service-account-2a54875c-a28e-4899-a174-66223c7b5ab5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:45.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6085" for this suite.

• [SLOW TEST:7.292 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":217,"skipped":3707,"failed":0}
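The kubectl exec commands in this test read the three files that the kubelet mounts from the service account token at a well-known path. A sketch of a pod that would receive such a mount, with assumed names:

```yaml
# Hypothetical sketch: with automounting enabled, the kubelet projects the
# service account credentials into the container at
#   /var/run/secrets/kubernetes.io/serviceaccount/{token,ca.crt,namespace}
# which are the three files the test reads via kubectl exec.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  serviceAccountName: default
  automountServiceAccountToken: true
  containers:
  - name: test
    image: busybox:1.29            # illustrative image
    command: ["sleep", "3600"]
```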
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:45.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:52.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9943" for this suite.

• [SLOW TEST:6.351 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3730,"failed":0}
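The read-only busybox test verifies that a container cannot write to its root filesystem. A sketch of the securityContext flag being tested, with illustrative name, image, and command:

```yaml
# Hypothetical sketch: readOnlyRootFilesystem mounts the container's root
# filesystem read-only, so the write below should fail.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29            # illustrative image
    command: ["sh", "-c", "echo test > /file"]   # expected to be rejected
    securityContext:
      readOnlyRootFilesystem: true
```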
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:52.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:52:52.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295" in namespace "downward-api-2918" to be "Succeeded or Failed"
May  6 18:52:52.871: INFO: Pod "downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295": Phase="Pending", Reason="", readiness=false. Elapsed: 105.586211ms
May  6 18:52:54.919: INFO: Pod "downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153221328s
May  6 18:52:56.924: INFO: Pod "downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158008415s
STEP: Saw pod success
May  6 18:52:56.924: INFO: Pod "downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295" satisfied condition "Succeeded or Failed"
May  6 18:52:56.927: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295 container client-container: 
STEP: delete the pod
May  6 18:52:57.126: INFO: Waiting for pod downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295 to disappear
May  6 18:52:57.135: INFO: Pod downwardapi-volume-a123776e-ab8e-4b49-b6bb-d85b4ac0a295 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:52:57.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2918" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3730,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:52:57.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-5141/secret-test-040818c5-47ce-4df5-b991-905bbfb6af31
STEP: Creating a pod to test consume secrets
May  6 18:52:57.324: INFO: Waiting up to 5m0s for pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e" in namespace "secrets-5141" to be "Succeeded or Failed"
May  6 18:52:57.334: INFO: Pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.56191ms
May  6 18:52:59.338: INFO: Pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014515615s
May  6 18:53:01.343: INFO: Pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e": Phase="Running", Reason="", readiness=true. Elapsed: 4.018844936s
May  6 18:53:03.346: INFO: Pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022679154s
STEP: Saw pod success
May  6 18:53:03.346: INFO: Pod "pod-configmaps-70853207-21a1-456b-831d-da2576906b7e" satisfied condition "Succeeded or Failed"
May  6 18:53:03.349: INFO: Trying to get logs from node kali-worker pod pod-configmaps-70853207-21a1-456b-831d-da2576906b7e container env-test: 
STEP: delete the pod
May  6 18:53:03.538: INFO: Waiting for pod pod-configmaps-70853207-21a1-456b-831d-da2576906b7e to disappear
May  6 18:53:03.574: INFO: Pod pod-configmaps-70853207-21a1-456b-831d-da2576906b7e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:53:03.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5141" for this suite.

• [SLOW TEST:6.540 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3731,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:53:03.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May  6 18:53:03.870: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:53:15.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5211" for this suite.

• [SLOW TEST:12.174 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:53:15.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  6 18:53:24.239: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:53:24.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5718" for this suite.

• [SLOW TEST:8.736 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3807,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:53:24.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8620
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 18:53:24.663: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 18:53:24.864: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:53:26.868: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:53:28.868: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 18:53:30.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:32.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:34.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:37.519: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:38.869: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:40.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:43.197: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:44.961: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 18:53:46.913: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 18:53:46.919: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 18:53:53.097: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=http&host=10.244.2.135&port=8080&tries=1'] Namespace:pod-network-test-8620 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:53:53.097: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:53:53.138932       7 log.go:172] (0xc001ca8dc0) (0xc00166a0a0) Create stream
I0506 18:53:53.138961       7 log.go:172] (0xc001ca8dc0) (0xc00166a0a0) Stream added, broadcasting: 1
I0506 18:53:53.140553       7 log.go:172] (0xc001ca8dc0) Reply frame received for 1
I0506 18:53:53.140598       7 log.go:172] (0xc001ca8dc0) (0xc0004c2820) Create stream
I0506 18:53:53.140614       7 log.go:172] (0xc001ca8dc0) (0xc0004c2820) Stream added, broadcasting: 3
I0506 18:53:53.141602       7 log.go:172] (0xc001ca8dc0) Reply frame received for 3
I0506 18:53:53.141640       7 log.go:172] (0xc001ca8dc0) (0xc000bc40a0) Create stream
I0506 18:53:53.141649       7 log.go:172] (0xc001ca8dc0) (0xc000bc40a0) Stream added, broadcasting: 5
I0506 18:53:53.142468       7 log.go:172] (0xc001ca8dc0) Reply frame received for 5
I0506 18:53:53.206140       7 log.go:172] (0xc001ca8dc0) Data frame received for 3
I0506 18:53:53.206213       7 log.go:172] (0xc0004c2820) (3) Data frame handling
I0506 18:53:53.206244       7 log.go:172] (0xc0004c2820) (3) Data frame sent
I0506 18:53:53.206335       7 log.go:172] (0xc001ca8dc0) Data frame received for 5
I0506 18:53:53.206359       7 log.go:172] (0xc000bc40a0) (5) Data frame handling
I0506 18:53:53.206394       7 log.go:172] (0xc001ca8dc0) Data frame received for 3
I0506 18:53:53.206413       7 log.go:172] (0xc0004c2820) (3) Data frame handling
I0506 18:53:53.209376       7 log.go:172] (0xc001ca8dc0) Data frame received for 1
I0506 18:53:53.209397       7 log.go:172] (0xc00166a0a0) (1) Data frame handling
I0506 18:53:53.209412       7 log.go:172] (0xc00166a0a0) (1) Data frame sent
I0506 18:53:53.209421       7 log.go:172] (0xc001ca8dc0) (0xc00166a0a0) Stream removed, broadcasting: 1
I0506 18:53:53.209431       7 log.go:172] (0xc001ca8dc0) Go away received
I0506 18:53:53.209561       7 log.go:172] (0xc001ca8dc0) (0xc00166a0a0) Stream removed, broadcasting: 1
I0506 18:53:53.209577       7 log.go:172] (0xc001ca8dc0) (0xc0004c2820) Stream removed, broadcasting: 3
I0506 18:53:53.209587       7 log.go:172] (0xc001ca8dc0) (0xc000bc40a0) Stream removed, broadcasting: 5
May  6 18:53:53.209: INFO: Waiting for responses: map[]
May  6 18:53:53.236: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=http&host=10.244.1.157&port=8080&tries=1'] Namespace:pod-network-test-8620 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 18:53:53.236: INFO: >>> kubeConfig: /root/.kube/config
I0506 18:53:53.265859       7 log.go:172] (0xc005c329a0) (0xc00227ea00) Create stream
I0506 18:53:53.265886       7 log.go:172] (0xc005c329a0) (0xc00227ea00) Stream added, broadcasting: 1
I0506 18:53:53.267922       7 log.go:172] (0xc005c329a0) Reply frame received for 1
I0506 18:53:53.267948       7 log.go:172] (0xc005c329a0) (0xc001e9b0e0) Create stream
I0506 18:53:53.267956       7 log.go:172] (0xc005c329a0) (0xc001e9b0e0) Stream added, broadcasting: 3
I0506 18:53:53.268945       7 log.go:172] (0xc005c329a0) Reply frame received for 3
I0506 18:53:53.268975       7 log.go:172] (0xc005c329a0) (0xc00166a3c0) Create stream
I0506 18:53:53.268986       7 log.go:172] (0xc005c329a0) (0xc00166a3c0) Stream added, broadcasting: 5
I0506 18:53:53.269968       7 log.go:172] (0xc005c329a0) Reply frame received for 5
I0506 18:53:53.331314       7 log.go:172] (0xc005c329a0) Data frame received for 3
I0506 18:53:53.331394       7 log.go:172] (0xc001e9b0e0) (3) Data frame handling
I0506 18:53:53.331463       7 log.go:172] (0xc001e9b0e0) (3) Data frame sent
I0506 18:53:53.331888       7 log.go:172] (0xc005c329a0) Data frame received for 3
I0506 18:53:53.331922       7 log.go:172] (0xc001e9b0e0) (3) Data frame handling
I0506 18:53:53.332179       7 log.go:172] (0xc005c329a0) Data frame received for 5
I0506 18:53:53.332198       7 log.go:172] (0xc00166a3c0) (5) Data frame handling
I0506 18:53:53.333895       7 log.go:172] (0xc005c329a0) Data frame received for 1
I0506 18:53:53.333912       7 log.go:172] (0xc00227ea00) (1) Data frame handling
I0506 18:53:53.333923       7 log.go:172] (0xc00227ea00) (1) Data frame sent
I0506 18:53:53.333933       7 log.go:172] (0xc005c329a0) (0xc00227ea00) Stream removed, broadcasting: 1
I0506 18:53:53.333964       7 log.go:172] (0xc005c329a0) Go away received
I0506 18:53:53.334007       7 log.go:172] (0xc005c329a0) (0xc00227ea00) Stream removed, broadcasting: 1
I0506 18:53:53.334021       7 log.go:172] (0xc005c329a0) (0xc001e9b0e0) Stream removed, broadcasting: 3
I0506 18:53:53.334030       7 log.go:172] (0xc005c329a0) (0xc00166a3c0) Stream removed, broadcasting: 5
May  6 18:53:53.334: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:53:53.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8620" for this suite.

• [SLOW TEST:28.743 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3838,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:53:53.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  6 18:53:54.106: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  6 18:53:54.302: INFO: Waiting for terminating namespaces to be deleted...
May  6 18:53:55.027: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May  6 18:53:55.541: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.541: INFO: 	Container kindnet-cni ready: true, restart count 1
May  6 18:53:55.541: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.541: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:53:55.541: INFO: netserver-0 from pod-network-test-8620 started at 2020-05-06 18:53:24 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.541: INFO: 	Container webserver ready: true, restart count 0
May  6 18:53:55.541: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May  6 18:53:55.611: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.611: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:53:55.611: INFO: netserver-1 from pod-network-test-8620 started at 2020-05-06 18:53:24 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.611: INFO: 	Container webserver ready: true, restart count 0
May  6 18:53:55.611: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.611: INFO: 	Container kindnet-cni ready: true, restart count 0
May  6 18:53:55.611: INFO: test-container-pod from pod-network-test-8620 started at 2020-05-06 18:53:47 +0000 UTC (1 container statuses recorded)
May  6 18:53:55.611: INFO: 	Container webserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1f87b83e-cb86-4a50-9deb-02d9422bcbe9 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1f87b83e-cb86-4a50-9deb-02d9422bcbe9 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1f87b83e-cb86-4a50-9deb-02d9422bcbe9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:54:14.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5813" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:21.583 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":224,"skipped":3852,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:54:14.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6463
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6463
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6463
May  6 18:54:15.705: INFO: Found 0 stateful pods, waiting for 1
May  6 18:54:25.709: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May  6 18:54:25.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 18:54:25.977: INFO: stderr: "I0506 18:54:25.837705    3463 log.go:172] (0xc00099c000) (0xc00082b2c0) Create stream\nI0506 18:54:25.837764    3463 log.go:172] (0xc00099c000) (0xc00082b2c0) Stream added, broadcasting: 1\nI0506 18:54:25.839275    3463 log.go:172] (0xc00099c000) Reply frame received for 1\nI0506 18:54:25.839306    3463 log.go:172] (0xc00099c000) (0xc00082b4a0) Create stream\nI0506 18:54:25.839315    3463 log.go:172] (0xc00099c000) (0xc00082b4a0) Stream added, broadcasting: 3\nI0506 18:54:25.840160    3463 log.go:172] (0xc00099c000) Reply frame received for 3\nI0506 18:54:25.840209    3463 log.go:172] (0xc00099c000) (0xc000970280) Create stream\nI0506 18:54:25.840220    3463 log.go:172] (0xc00099c000) (0xc000970280) Stream added, broadcasting: 5\nI0506 18:54:25.840943    3463 log.go:172] (0xc00099c000) Reply frame received for 5\nI0506 18:54:25.891028    3463 log.go:172] (0xc00099c000) Data frame received for 5\nI0506 18:54:25.891062    3463 log.go:172] (0xc000970280) (5) Data frame handling\nI0506 18:54:25.891085    3463 log.go:172] (0xc000970280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 18:54:25.968484    3463 log.go:172] (0xc00099c000) Data frame received for 3\nI0506 18:54:25.968511    3463 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0506 18:54:25.968526    3463 log.go:172] (0xc00082b4a0) (3) Data frame sent\nI0506 18:54:25.968538    3463 log.go:172] (0xc00099c000) Data frame received for 3\nI0506 18:54:25.968543    3463 log.go:172] (0xc00082b4a0) (3) Data frame handling\nI0506 18:54:25.969013    3463 log.go:172] (0xc00099c000) Data frame received for 5\nI0506 18:54:25.969025    3463 log.go:172] (0xc000970280) (5) Data frame handling\nI0506 18:54:25.970421    3463 log.go:172] (0xc00099c000) Data frame received for 1\nI0506 18:54:25.970436    3463 log.go:172] (0xc00082b2c0) (1) Data frame handling\nI0506 18:54:25.970442    3463 log.go:172] (0xc00082b2c0) (1) Data frame sent\nI0506 18:54:25.970454    3463 log.go:172] (0xc00099c000) (0xc00082b2c0) Stream removed, broadcasting: 1\nI0506 18:54:25.970514    3463 log.go:172] (0xc00099c000) Go away received\nI0506 18:54:25.970742    3463 log.go:172] (0xc00099c000) (0xc00082b2c0) Stream removed, broadcasting: 1\nI0506 18:54:25.970756    3463 log.go:172] (0xc00099c000) (0xc00082b4a0) Stream removed, broadcasting: 3\nI0506 18:54:25.970762    3463 log.go:172] (0xc00099c000) (0xc000970280) Stream removed, broadcasting: 5\n"
May  6 18:54:25.978: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 18:54:25.978: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  6 18:54:25.985: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May  6 18:54:36.022: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  6 18:54:36.022: INFO: Waiting for statefulset status.replicas updated to 0
May  6 18:54:36.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999365s
May  6 18:54:37.079: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.956417492s
May  6 18:54:38.244: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.951418978s
May  6 18:54:39.633: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.786473207s
May  6 18:54:41.172: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.396916552s
May  6 18:54:42.176: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.858210936s
May  6 18:54:43.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.854270242s
May  6 18:54:44.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.850277314s
May  6 18:54:45.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 845.24314ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6463
May  6 18:54:46.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 18:54:46.627: INFO: stderr: "I0506 18:54:46.547725    3485 log.go:172] (0xc00003a0b0) (0xc000843400) Create stream\nI0506 18:54:46.547790    3485 log.go:172] (0xc00003a0b0) (0xc000843400) Stream added, broadcasting: 1\nI0506 18:54:46.556538    3485 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0506 18:54:46.556569    3485 log.go:172] (0xc00003a0b0) (0xc0009b6000) Create stream\nI0506 18:54:46.556579    3485 log.go:172] (0xc00003a0b0) (0xc0009b6000) Stream added, broadcasting: 3\nI0506 18:54:46.558015    3485 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0506 18:54:46.558086    3485 log.go:172] (0xc00003a0b0) (0xc00002a000) Create stream\nI0506 18:54:46.558101    3485 log.go:172] (0xc00003a0b0) (0xc00002a000) Stream added, broadcasting: 5\nI0506 18:54:46.558771    3485 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0506 18:54:46.620152    3485 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0506 18:54:46.620188    3485 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0506 18:54:46.620236    3485 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0506 18:54:46.620285    3485 log.go:172] (0xc00002a000) (5) Data frame handling\nI0506 18:54:46.620309    3485 log.go:172] (0xc00002a000) (5) Data frame sent\nI0506 18:54:46.620334    3485 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0506 18:54:46.620355    3485 log.go:172] (0xc00002a000) (5) Data frame handling\nI0506 18:54:46.620380    3485 log.go:172] (0xc0009b6000) (3) Data frame sent\nI0506 18:54:46.620404    3485 log.go:172] (0xc00003a0b0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 18:54:46.620427    3485 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0506 18:54:46.622087    3485 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0506 18:54:46.622113    3485 log.go:172] (0xc000843400) (1) Data frame handling\nI0506 18:54:46.622132    3485 log.go:172] (0xc000843400) (1) Data frame sent\nI0506 18:54:46.622154    3485 log.go:172] (0xc00003a0b0) (0xc000843400) Stream removed, broadcasting: 1\nI0506 18:54:46.622173    3485 log.go:172] (0xc00003a0b0) Go away received\nI0506 18:54:46.622513    3485 log.go:172] (0xc00003a0b0) (0xc000843400) Stream removed, broadcasting: 1\nI0506 18:54:46.622528    3485 log.go:172] (0xc00003a0b0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0506 18:54:46.622536    3485 log.go:172] (0xc00003a0b0) (0xc00002a000) Stream removed, broadcasting: 5\n"
May  6 18:54:46.627: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 18:54:46.627: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 18:54:46.631: INFO: Found 1 stateful pods, waiting for 3
May  6 18:54:56.850: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:54:56.850: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:54:56.850: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
May  6 18:55:06.636: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:55:06.636: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May  6 18:55:06.636: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May  6 18:55:06.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 18:55:06.845: INFO: stderr: "I0506 18:55:06.777625    3506 log.go:172] (0xc000419290) (0xc0006c39a0) Create stream\nI0506 18:55:06.777680    3506 log.go:172] (0xc000419290) (0xc0006c39a0) Stream added, broadcasting: 1\nI0506 18:55:06.779792    3506 log.go:172] (0xc000419290) Reply frame received for 1\nI0506 18:55:06.779824    3506 log.go:172] (0xc000419290) (0xc0006b30e0) Create stream\nI0506 18:55:06.779835    3506 log.go:172] (0xc000419290) (0xc0006b30e0) Stream added, broadcasting: 3\nI0506 18:55:06.780761    3506 log.go:172] (0xc000419290) Reply frame received for 3\nI0506 18:55:06.780812    3506 log.go:172] (0xc000419290) (0xc0005a59a0) Create stream\nI0506 18:55:06.780834    3506 log.go:172] (0xc000419290) (0xc0005a59a0) Stream added, broadcasting: 5\nI0506 18:55:06.781869    3506 log.go:172] (0xc000419290) Reply frame received for 5\nI0506 18:55:06.839871    3506 log.go:172] (0xc000419290) Data frame received for 3\nI0506 18:55:06.839902    3506 log.go:172] (0xc0006b30e0) (3) Data frame handling\nI0506 18:55:06.839924    3506 log.go:172] (0xc0006b30e0) (3) Data frame sent\nI0506 18:55:06.839942    3506 log.go:172] (0xc000419290) Data frame received for 5\nI0506 18:55:06.839956    3506 log.go:172] (0xc0005a59a0) (5) Data frame handling\nI0506 18:55:06.839963    3506 log.go:172] (0xc0005a59a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 18:55:06.839974    3506 log.go:172] (0xc000419290) Data frame received for 3\nI0506 18:55:06.839980    3506 log.go:172] (0xc0006b30e0) (3) Data frame handling\nI0506 18:55:06.840149    3506 log.go:172] (0xc000419290) Data frame received for 5\nI0506 18:55:06.840178    3506 log.go:172] (0xc0005a59a0) (5) Data frame handling\nI0506 18:55:06.841775    3506 log.go:172] (0xc000419290) Data frame received for 1\nI0506 18:55:06.841801    3506 log.go:172] (0xc0006c39a0) (1) Data frame handling\nI0506 18:55:06.841817    3506 log.go:172] (0xc0006c39a0) (1) Data frame sent\nI0506 18:55:06.841827    3506 log.go:172] (0xc000419290) (0xc0006c39a0) Stream removed, broadcasting: 1\nI0506 18:55:06.841927    3506 log.go:172] (0xc000419290) Go away received\nI0506 18:55:06.842150    3506 log.go:172] (0xc000419290) (0xc0006c39a0) Stream removed, broadcasting: 1\nI0506 18:55:06.842177    3506 log.go:172] (0xc000419290) (0xc0006b30e0) Stream removed, broadcasting: 3\nI0506 18:55:06.842190    3506 log.go:172] (0xc000419290) (0xc0005a59a0) Stream removed, broadcasting: 5\n"
May  6 18:55:06.846: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 18:55:06.846: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  6 18:55:06.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 18:55:07.901: INFO: stderr: "I0506 18:55:06.995082    3529 log.go:172] (0xc0009b7600) (0xc0009a28c0) Create stream\nI0506 18:55:06.995148    3529 log.go:172] (0xc0009b7600) (0xc0009a28c0) Stream added, broadcasting: 1\nI0506 18:55:07.004961    3529 log.go:172] (0xc0009b7600) Reply frame received for 1\nI0506 18:55:07.005150    3529 log.go:172] (0xc0009b7600) (0xc0005b75e0) Create stream\nI0506 18:55:07.005229    3529 log.go:172] (0xc0009b7600) (0xc0005b75e0) Stream added, broadcasting: 3\nI0506 18:55:07.006652    3529 log.go:172] (0xc0009b7600) Reply frame received for 3\nI0506 18:55:07.006727    3529 log.go:172] (0xc0009b7600) (0xc0004eca00) Create stream\nI0506 18:55:07.006764    3529 log.go:172] (0xc0009b7600) (0xc0004eca00) Stream added, broadcasting: 5\nI0506 18:55:07.007586    3529 log.go:172] (0xc0009b7600) Reply frame received for 5\nI0506 18:55:07.061047    3529 log.go:172] (0xc0009b7600) Data frame received for 5\nI0506 18:55:07.061095    3529 log.go:172] (0xc0004eca00) (5) Data frame handling\nI0506 18:55:07.061351    3529 log.go:172] (0xc0004eca00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 18:55:07.893640    3529 log.go:172] (0xc0009b7600) Data frame received for 3\nI0506 18:55:07.893796    3529 log.go:172] (0xc0005b75e0) (3) Data frame handling\nI0506 18:55:07.893908    3529 log.go:172] (0xc0005b75e0) (3) Data frame sent\nI0506 18:55:07.894091    3529 log.go:172] (0xc0009b7600) Data frame received for 3\nI0506 18:55:07.894329    3529 log.go:172] (0xc0005b75e0) (3) Data frame handling\nI0506 18:55:07.894393    3529 log.go:172] (0xc0009b7600) Data frame received for 5\nI0506 18:55:07.894406    3529 log.go:172] (0xc0004eca00) (5) Data frame handling\nI0506 18:55:07.897001    3529 log.go:172] (0xc0009b7600) Data frame received for 1\nI0506 18:55:07.897283    3529 log.go:172] (0xc0009a28c0) (1) Data frame handling\nI0506 18:55:07.897384    3529 log.go:172] (0xc0009a28c0) (1) Data frame sent\nI0506 18:55:07.897706    3529 log.go:172] (0xc0009b7600) (0xc0009a28c0) Stream removed, broadcasting: 1\nI0506 18:55:07.898088    3529 log.go:172] (0xc0009b7600) (0xc0009a28c0) Stream removed, broadcasting: 1\nI0506 18:55:07.898113    3529 log.go:172] (0xc0009b7600) (0xc0005b75e0) Stream removed, broadcasting: 3\nI0506 18:55:07.898300    3529 log.go:172] (0xc0009b7600) (0xc0004eca00) Stream removed, broadcasting: 5\n"
May  6 18:55:07.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 18:55:07.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  6 18:55:07.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  6 18:55:08.578: INFO: stderr: "I0506 18:55:08.262636    3551 log.go:172] (0xc000ae6000) (0xc0009a8000) Create stream\nI0506 18:55:08.262722    3551 log.go:172] (0xc000ae6000) (0xc0009a8000) Stream added, broadcasting: 1\nI0506 18:55:08.265650    3551 log.go:172] (0xc000ae6000) Reply frame received for 1\nI0506 18:55:08.265697    3551 log.go:172] (0xc000ae6000) (0xc000abc320) Create stream\nI0506 18:55:08.265711    3551 log.go:172] (0xc000ae6000) (0xc000abc320) Stream added, broadcasting: 3\nI0506 18:55:08.266786    3551 log.go:172] (0xc000ae6000) Reply frame received for 3\nI0506 18:55:08.266833    3551 log.go:172] (0xc000ae6000) (0xc0009a81e0) Create stream\nI0506 18:55:08.266848    3551 log.go:172] (0xc000ae6000) (0xc0009a81e0) Stream added, broadcasting: 5\nI0506 18:55:08.267703    3551 log.go:172] (0xc000ae6000) Reply frame received for 5\nI0506 18:55:08.326460    3551 log.go:172] (0xc000ae6000) Data frame received for 5\nI0506 18:55:08.326486    3551 log.go:172] (0xc0009a81e0) (5) Data frame handling\nI0506 18:55:08.326503    3551 log.go:172] (0xc0009a81e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 18:55:08.571408    3551 log.go:172] (0xc000ae6000) Data frame received for 3\nI0506 18:55:08.571434    3551 log.go:172] (0xc000abc320) (3) Data frame handling\nI0506 18:55:08.571443    3551 log.go:172] (0xc000abc320) (3) Data frame sent\nI0506 18:55:08.571448    3551 log.go:172] (0xc000ae6000) Data frame received for 3\nI0506 18:55:08.571453    3551 log.go:172] (0xc000abc320) (3) Data frame handling\nI0506 18:55:08.571534    3551 log.go:172] (0xc000ae6000) Data frame received for 5\nI0506 18:55:08.571545    3551 log.go:172] (0xc0009a81e0) (5) Data frame handling\nI0506 18:55:08.573915    3551 log.go:172] (0xc000ae6000) Data frame received for 1\nI0506 18:55:08.573944    3551 log.go:172] (0xc0009a8000) (1) Data frame handling\nI0506 18:55:08.573980    3551 log.go:172] (0xc0009a8000) (1) Data frame sent\nI0506 18:55:08.574007  
  3551 log.go:172] (0xc000ae6000) (0xc0009a8000) Stream removed, broadcasting: 1\nI0506 18:55:08.574029    3551 log.go:172] (0xc000ae6000) Go away received\nI0506 18:55:08.574295    3551 log.go:172] (0xc000ae6000) (0xc0009a8000) Stream removed, broadcasting: 1\nI0506 18:55:08.574308    3551 log.go:172] (0xc000ae6000) (0xc000abc320) Stream removed, broadcasting: 3\nI0506 18:55:08.574314    3551 log.go:172] (0xc000ae6000) (0xc0009a81e0) Stream removed, broadcasting: 5\n"
May  6 18:55:08.578: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  6 18:55:08.578: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  6 18:55:08.578: INFO: Waiting for statefulset status.replicas updated to 0
May  6 18:55:08.581: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May  6 18:55:18.593: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May  6 18:55:18.593: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May  6 18:55:18.593: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May  6 18:55:18.747: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999224s
May  6 18:55:19.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97657145s
May  6 18:55:21.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.910150841s
May  6 18:55:22.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.628804902s
May  6 18:55:23.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.61139288s
May  6 18:55:24.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.606026617s
May  6 18:55:25.380: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.600414747s
May  6 18:55:26.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.343793511s
May  6 18:55:27.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.338761331s
May  6 18:55:28.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 333.439933ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6463
May  6 18:55:29.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 18:55:29.606: INFO: stderr: "I0506 18:55:29.528167    3570 log.go:172] (0xc00003a840) (0xc0006d9540) Create stream\nI0506 18:55:29.528225    3570 log.go:172] (0xc00003a840) (0xc0006d9540) Stream added, broadcasting: 1\nI0506 18:55:29.530777    3570 log.go:172] (0xc00003a840) Reply frame received for 1\nI0506 18:55:29.530808    3570 log.go:172] (0xc00003a840) (0xc0009e6000) Create stream\nI0506 18:55:29.530822    3570 log.go:172] (0xc00003a840) (0xc0009e6000) Stream added, broadcasting: 3\nI0506 18:55:29.531534    3570 log.go:172] (0xc00003a840) Reply frame received for 3\nI0506 18:55:29.531578    3570 log.go:172] (0xc00003a840) (0xc0006d95e0) Create stream\nI0506 18:55:29.531589    3570 log.go:172] (0xc00003a840) (0xc0006d95e0) Stream added, broadcasting: 5\nI0506 18:55:29.532320    3570 log.go:172] (0xc00003a840) Reply frame received for 5\nI0506 18:55:29.599947    3570 log.go:172] (0xc00003a840) Data frame received for 3\nI0506 18:55:29.599993    3570 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0506 18:55:29.600007    3570 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0506 18:55:29.600024    3570 log.go:172] (0xc00003a840) Data frame received for 3\nI0506 18:55:29.600033    3570 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0506 18:55:29.600068    3570 log.go:172] (0xc00003a840) Data frame received for 5\nI0506 18:55:29.600080    3570 log.go:172] (0xc0006d95e0) (5) Data frame handling\nI0506 18:55:29.600093    3570 log.go:172] (0xc0006d95e0) (5) Data frame sent\nI0506 18:55:29.600108    3570 log.go:172] (0xc00003a840) Data frame received for 5\nI0506 18:55:29.600119    3570 log.go:172] (0xc0006d95e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 18:55:29.602001    3570 log.go:172] (0xc00003a840) Data frame received for 1\nI0506 18:55:29.602032    3570 log.go:172] (0xc0006d9540) (1) Data frame handling\nI0506 18:55:29.602079    3570 log.go:172] (0xc0006d9540) (1) Data frame sent\nI0506 18:55:29.602105  
  3570 log.go:172] (0xc00003a840) (0xc0006d9540) Stream removed, broadcasting: 1\nI0506 18:55:29.602149    3570 log.go:172] (0xc00003a840) Go away received\nI0506 18:55:29.602530    3570 log.go:172] (0xc00003a840) (0xc0006d9540) Stream removed, broadcasting: 1\nI0506 18:55:29.602549    3570 log.go:172] (0xc00003a840) (0xc0009e6000) Stream removed, broadcasting: 3\nI0506 18:55:29.602560    3570 log.go:172] (0xc00003a840) (0xc0006d95e0) Stream removed, broadcasting: 5\n"
May  6 18:55:29.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 18:55:29.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 18:55:29.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 18:55:29.830: INFO: stderr: "I0506 18:55:29.752784    3590 log.go:172] (0xc000a28160) (0xc00056ea00) Create stream\nI0506 18:55:29.752851    3590 log.go:172] (0xc000a28160) (0xc00056ea00) Stream added, broadcasting: 1\nI0506 18:55:29.755477    3590 log.go:172] (0xc000a28160) Reply frame received for 1\nI0506 18:55:29.755510    3590 log.go:172] (0xc000a28160) (0xc0009e8000) Create stream\nI0506 18:55:29.755520    3590 log.go:172] (0xc000a28160) (0xc0009e8000) Stream added, broadcasting: 3\nI0506 18:55:29.756302    3590 log.go:172] (0xc000a28160) Reply frame received for 3\nI0506 18:55:29.756347    3590 log.go:172] (0xc000a28160) (0xc00096c000) Create stream\nI0506 18:55:29.756372    3590 log.go:172] (0xc000a28160) (0xc00096c000) Stream added, broadcasting: 5\nI0506 18:55:29.757271    3590 log.go:172] (0xc000a28160) Reply frame received for 5\nI0506 18:55:29.822478    3590 log.go:172] (0xc000a28160) Data frame received for 5\nI0506 18:55:29.822522    3590 log.go:172] (0xc00096c000) (5) Data frame handling\nI0506 18:55:29.822543    3590 log.go:172] (0xc00096c000) (5) Data frame sent\nI0506 18:55:29.822558    3590 log.go:172] (0xc000a28160) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 18:55:29.822580    3590 log.go:172] (0xc000a28160) Data frame received for 3\nI0506 18:55:29.822615    3590 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0506 18:55:29.822638    3590 log.go:172] (0xc0009e8000) (3) Data frame sent\nI0506 18:55:29.822659    3590 log.go:172] (0xc000a28160) Data frame received for 3\nI0506 18:55:29.822677    3590 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0506 18:55:29.822714    3590 log.go:172] (0xc00096c000) (5) Data frame handling\nI0506 18:55:29.824809    3590 log.go:172] (0xc000a28160) Data frame received for 1\nI0506 18:55:29.824838    3590 log.go:172] (0xc00056ea00) (1) Data frame handling\nI0506 18:55:29.824851    3590 log.go:172] (0xc00056ea00) (1) Data frame sent\nI0506 18:55:29.824866  
  3590 log.go:172] (0xc000a28160) (0xc00056ea00) Stream removed, broadcasting: 1\nI0506 18:55:29.824894    3590 log.go:172] (0xc000a28160) Go away received\nI0506 18:55:29.825434    3590 log.go:172] (0xc000a28160) (0xc00056ea00) Stream removed, broadcasting: 1\nI0506 18:55:29.825460    3590 log.go:172] (0xc000a28160) (0xc0009e8000) Stream removed, broadcasting: 3\nI0506 18:55:29.825474    3590 log.go:172] (0xc000a28160) (0xc00096c000) Stream removed, broadcasting: 5\n"
May  6 18:55:29.830: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 18:55:29.830: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 18:55:29.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6463 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  6 18:55:30.043: INFO: stderr: "I0506 18:55:29.969859    3611 log.go:172] (0xc000aecd10) (0xc000b183c0) Create stream\nI0506 18:55:29.969906    3611 log.go:172] (0xc000aecd10) (0xc000b183c0) Stream added, broadcasting: 1\nI0506 18:55:29.977654    3611 log.go:172] (0xc000aecd10) Reply frame received for 1\nI0506 18:55:29.977699    3611 log.go:172] (0xc000aecd10) (0xc000ada0a0) Create stream\nI0506 18:55:29.977721    3611 log.go:172] (0xc000aecd10) (0xc000ada0a0) Stream added, broadcasting: 3\nI0506 18:55:29.978530    3611 log.go:172] (0xc000aecd10) Reply frame received for 3\nI0506 18:55:29.978555    3611 log.go:172] (0xc000aecd10) (0xc000b18460) Create stream\nI0506 18:55:29.978563    3611 log.go:172] (0xc000aecd10) (0xc000b18460) Stream added, broadcasting: 5\nI0506 18:55:29.979383    3611 log.go:172] (0xc000aecd10) Reply frame received for 5\nI0506 18:55:30.038582    3611 log.go:172] (0xc000aecd10) Data frame received for 5\nI0506 18:55:30.038638    3611 log.go:172] (0xc000b18460) (5) Data frame handling\nI0506 18:55:30.038660    3611 log.go:172] (0xc000b18460) (5) Data frame sent\nI0506 18:55:30.038693    3611 log.go:172] (0xc000aecd10) Data frame received for 5\nI0506 18:55:30.038705    3611 log.go:172] (0xc000b18460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 18:55:30.038733    3611 log.go:172] (0xc000aecd10) Data frame received for 3\nI0506 18:55:30.038753    3611 log.go:172] (0xc000ada0a0) (3) Data frame handling\nI0506 18:55:30.038770    3611 log.go:172] (0xc000ada0a0) (3) Data frame sent\nI0506 18:55:30.038782    3611 log.go:172] (0xc000aecd10) Data frame received for 3\nI0506 18:55:30.038797    3611 log.go:172] (0xc000ada0a0) (3) Data frame handling\nI0506 18:55:30.039866    3611 log.go:172] (0xc000aecd10) Data frame received for 1\nI0506 18:55:30.039893    3611 log.go:172] (0xc000b183c0) (1) Data frame handling\nI0506 18:55:30.039911    3611 log.go:172] (0xc000b183c0) (1) Data frame sent\nI0506 18:55:30.039924  
  3611 log.go:172] (0xc000aecd10) (0xc000b183c0) Stream removed, broadcasting: 1\nI0506 18:55:30.039940    3611 log.go:172] (0xc000aecd10) Go away received\nI0506 18:55:30.040185    3611 log.go:172] (0xc000aecd10) (0xc000b183c0) Stream removed, broadcasting: 1\nI0506 18:55:30.040203    3611 log.go:172] (0xc000aecd10) (0xc000ada0a0) Stream removed, broadcasting: 3\nI0506 18:55:30.040210    3611 log.go:172] (0xc000aecd10) (0xc000b18460) Stream removed, broadcasting: 5\n"
May  6 18:55:30.043: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  6 18:55:30.043: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  6 18:55:30.043: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  6 18:56:10.059: INFO: Deleting all statefulset in ns statefulset-6463
May  6 18:56:10.063: INFO: Scaling statefulset ss to 0
May  6 18:56:10.099: INFO: Waiting for statefulset status.replicas updated to 0
May  6 18:56:10.103: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:10.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6463" for this suite.

• [SLOW TEST:115.207 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":225,"skipped":3864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:10.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8304
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8304
I0506 18:56:10.441315       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8304, replica count: 2
I0506 18:56:13.491727       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 18:56:16.491948       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 18:56:16.492: INFO: Creating new exec pod
May  6 18:56:27.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8304 execpod4vqkw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May  6 18:56:27.316: INFO: stderr: "I0506 18:56:27.227602    3631 log.go:172] (0xc00072cb00) (0xc000728280) Create stream\nI0506 18:56:27.227668    3631 log.go:172] (0xc00072cb00) (0xc000728280) Stream added, broadcasting: 1\nI0506 18:56:27.229986    3631 log.go:172] (0xc00072cb00) Reply frame received for 1\nI0506 18:56:27.230025    3631 log.go:172] (0xc00072cb00) (0xc0008074a0) Create stream\nI0506 18:56:27.230038    3631 log.go:172] (0xc00072cb00) (0xc0008074a0) Stream added, broadcasting: 3\nI0506 18:56:27.230786    3631 log.go:172] (0xc00072cb00) Reply frame received for 3\nI0506 18:56:27.230816    3631 log.go:172] (0xc00072cb00) (0xc000728320) Create stream\nI0506 18:56:27.230828    3631 log.go:172] (0xc00072cb00) (0xc000728320) Stream added, broadcasting: 5\nI0506 18:56:27.231691    3631 log.go:172] (0xc00072cb00) Reply frame received for 5\nI0506 18:56:27.308807    3631 log.go:172] (0xc00072cb00) Data frame received for 5\nI0506 18:56:27.308850    3631 log.go:172] (0xc000728320) (5) Data frame handling\nI0506 18:56:27.308877    3631 log.go:172] (0xc000728320) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0506 18:56:27.309280    3631 log.go:172] (0xc00072cb00) Data frame received for 5\nI0506 18:56:27.309303    3631 log.go:172] (0xc000728320) (5) Data frame handling\nI0506 18:56:27.309313    3631 log.go:172] (0xc000728320) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 18:56:27.309640    3631 log.go:172] (0xc00072cb00) Data frame received for 5\nI0506 18:56:27.309659    3631 log.go:172] (0xc000728320) (5) Data frame handling\nI0506 18:56:27.310067    3631 log.go:172] (0xc00072cb00) Data frame received for 3\nI0506 18:56:27.310083    3631 log.go:172] (0xc0008074a0) (3) Data frame handling\nI0506 18:56:27.311806    3631 log.go:172] (0xc00072cb00) Data frame received for 1\nI0506 18:56:27.311835    3631 log.go:172] (0xc000728280) (1) Data frame handling\nI0506 18:56:27.311848    3631 log.go:172] 
(0xc000728280) (1) Data frame sent\nI0506 18:56:27.311866    3631 log.go:172] (0xc00072cb00) (0xc000728280) Stream removed, broadcasting: 1\nI0506 18:56:27.311889    3631 log.go:172] (0xc00072cb00) Go away received\nI0506 18:56:27.312300    3631 log.go:172] (0xc00072cb00) (0xc000728280) Stream removed, broadcasting: 1\nI0506 18:56:27.312325    3631 log.go:172] (0xc00072cb00) (0xc0008074a0) Stream removed, broadcasting: 3\nI0506 18:56:27.312338    3631 log.go:172] (0xc00072cb00) (0xc000728320) Stream removed, broadcasting: 5\n"
May  6 18:56:27.316: INFO: stdout: ""
May  6 18:56:27.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-8304 execpod4vqkw -- /bin/sh -x -c nc -zv -t -w 2 10.104.221.199 80'
May  6 18:56:27.511: INFO: stderr: "I0506 18:56:27.445873    3653 log.go:172] (0xc000586000) (0xc000a300a0) Create stream\nI0506 18:56:27.445924    3653 log.go:172] (0xc000586000) (0xc000a300a0) Stream added, broadcasting: 1\nI0506 18:56:27.448516    3653 log.go:172] (0xc000586000) Reply frame received for 1\nI0506 18:56:27.448558    3653 log.go:172] (0xc000586000) (0xc000a30140) Create stream\nI0506 18:56:27.448571    3653 log.go:172] (0xc000586000) (0xc000a30140) Stream added, broadcasting: 3\nI0506 18:56:27.449756    3653 log.go:172] (0xc000586000) Reply frame received for 3\nI0506 18:56:27.449817    3653 log.go:172] (0xc000586000) (0xc0006e1220) Create stream\nI0506 18:56:27.449847    3653 log.go:172] (0xc000586000) (0xc0006e1220) Stream added, broadcasting: 5\nI0506 18:56:27.450944    3653 log.go:172] (0xc000586000) Reply frame received for 5\nI0506 18:56:27.505354    3653 log.go:172] (0xc000586000) Data frame received for 5\nI0506 18:56:27.505414    3653 log.go:172] (0xc0006e1220) (5) Data frame handling\nI0506 18:56:27.505437    3653 log.go:172] (0xc0006e1220) (5) Data frame sent\nI0506 18:56:27.505452    3653 log.go:172] (0xc000586000) Data frame received for 5\nI0506 18:56:27.505467    3653 log.go:172] (0xc0006e1220) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.221.199 80\nConnection to 10.104.221.199 80 port [tcp/http] succeeded!\nI0506 18:56:27.505501    3653 log.go:172] (0xc000586000) Data frame received for 3\nI0506 18:56:27.505544    3653 log.go:172] (0xc000a30140) (3) Data frame handling\nI0506 18:56:27.506879    3653 log.go:172] (0xc000586000) Data frame received for 1\nI0506 18:56:27.506906    3653 log.go:172] (0xc000a300a0) (1) Data frame handling\nI0506 18:56:27.506937    3653 log.go:172] (0xc000a300a0) (1) Data frame sent\nI0506 18:56:27.506957    3653 log.go:172] (0xc000586000) (0xc000a300a0) Stream removed, broadcasting: 1\nI0506 18:56:27.506993    3653 log.go:172] (0xc000586000) Go away received\nI0506 18:56:27.507342    3653 log.go:172] 
(0xc000586000) (0xc000a300a0) Stream removed, broadcasting: 1\nI0506 18:56:27.507374    3653 log.go:172] (0xc000586000) (0xc000a30140) Stream removed, broadcasting: 3\nI0506 18:56:27.507394    3653 log.go:172] (0xc000586000) (0xc0006e1220) Stream removed, broadcasting: 5\n"
May  6 18:56:27.511: INFO: stdout: ""
May  6 18:56:27.511: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:27.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8304" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.414 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":226,"skipped":3889,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:27.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-37457726-4c1e-469c-91a7-e5fba78c969c
STEP: Creating a pod to test consume secrets
May  6 18:56:27.635: INFO: Waiting up to 5m0s for pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870" in namespace "secrets-2190" to be "Succeeded or Failed"
May  6 18:56:27.639: INFO: Pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870": Phase="Pending", Reason="", readiness=false. Elapsed: 3.946343ms
May  6 18:56:29.644: INFO: Pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008492609s
May  6 18:56:31.648: INFO: Pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870": Phase="Running", Reason="", readiness=true. Elapsed: 4.012466641s
May  6 18:56:33.651: INFO: Pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015938971s
STEP: Saw pod success
May  6 18:56:33.651: INFO: Pod "pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870" satisfied condition "Succeeded or Failed"
May  6 18:56:33.654: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870 container secret-volume-test: 
STEP: delete the pod
May  6 18:56:33.744: INFO: Waiting for pod pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870 to disappear
May  6 18:56:33.751: INFO: Pod pod-secrets-73316336-79a2-498b-aeb7-0fa131a60870 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:33.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2190" for this suite.

• [SLOW TEST:6.211 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3895,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:33.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May  6 18:56:34.007: INFO: Waiting up to 5m0s for pod "var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0" in namespace "var-expansion-8829" to be "Succeeded or Failed"
May  6 18:56:34.025: INFO: Pod "var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.78731ms
May  6 18:56:36.028: INFO: Pod "var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020955294s
May  6 18:56:38.076: INFO: Pod "var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069275768s
STEP: Saw pod success
May  6 18:56:38.077: INFO: Pod "var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0" satisfied condition "Succeeded or Failed"
May  6 18:56:38.080: INFO: Trying to get logs from node kali-worker pod var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0 container dapi-container: 
STEP: delete the pod
May  6 18:56:38.108: INFO: Waiting for pod var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0 to disappear
May  6 18:56:38.123: INFO: Pod var-expansion-f392fb88-ad86-4b35-833c-d1055c6ba4b0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:38.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8829" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3897,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:38.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 18:56:39.085: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 18:56:41.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388199, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388199, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388199, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388199, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 18:56:44.173: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:44.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2696" for this suite.
STEP: Destroying namespace "webhook-2696-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.367 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":229,"skipped":3908,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:44.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:56:44.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:56:51.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2149" for this suite.

• [SLOW TEST:6.835 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3909,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:56:51.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  6 18:56:57.772: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fd0abcab-6a29-4229-b15c-b8571bd702cf"
May  6 18:56:57.772: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fd0abcab-6a29-4229-b15c-b8571bd702cf" in namespace "pods-4780" to be "terminated due to deadline exceeded"
May  6 18:56:57.903: INFO: Pod "pod-update-activedeadlineseconds-fd0abcab-6a29-4229-b15c-b8571bd702cf": Phase="Running", Reason="", readiness=true. Elapsed: 131.71834ms
May  6 18:57:00.366: INFO: Pod "pod-update-activedeadlineseconds-fd0abcab-6a29-4229-b15c-b8571bd702cf": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.594724991s
May  6 18:57:00.366: INFO: Pod "pod-update-activedeadlineseconds-fd0abcab-6a29-4229-b15c-b8571bd702cf" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:00.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4780" for this suite.

• [SLOW TEST:9.039 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3944,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:00.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:57:01.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c" in namespace "projected-1383" to be "Succeeded or Failed"
May  6 18:57:01.701: INFO: Pod "downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 119.165466ms
May  6 18:57:03.705: INFO: Pod "downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122855569s
May  6 18:57:05.814: INFO: Pod "downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231921313s
STEP: Saw pod success
May  6 18:57:05.814: INFO: Pod "downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c" satisfied condition "Succeeded or Failed"
May  6 18:57:05.817: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c container client-container: 
STEP: delete the pod
May  6 18:57:05.971: INFO: Waiting for pod downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c to disappear
May  6 18:57:06.011: INFO: Pod downwardapi-volume-b07922f2-6376-49d4-8325-60d0a3e6cd9c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:06.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1383" for this suite.

• [SLOW TEST:5.763 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3944,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:06.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-043c5c92-ed8f-4154-b868-91d00da4f753
STEP: Creating a pod to test consume secrets
May  6 18:57:06.320: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7" in namespace "projected-775" to be "Succeeded or Failed"
May  6 18:57:06.475: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7": Phase="Pending", Reason="", readiness=false. Elapsed: 155.346384ms
May  6 18:57:08.975: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.655089556s
May  6 18:57:11.029: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.709412662s
May  6 18:57:13.033: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7": Phase="Running", Reason="", readiness=true. Elapsed: 6.712910545s
May  6 18:57:15.038: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.71793645s
STEP: Saw pod success
May  6 18:57:15.038: INFO: Pod "pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7" satisfied condition "Succeeded or Failed"
May  6 18:57:15.041: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7 container projected-secret-volume-test: 
STEP: delete the pod
May  6 18:57:15.096: INFO: Waiting for pod pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7 to disappear
May  6 18:57:15.113: INFO: Pod pod-projected-secrets-0a94c5e4-5091-4b52-bba2-641f7e0848c7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:15.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-775" for this suite.

• [SLOW TEST:9.080 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3951,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:15.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May  6 18:57:15.294: INFO: Waiting up to 5m0s for pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0" in namespace "emptydir-4875" to be "Succeeded or Failed"
May  6 18:57:15.304: INFO: Pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.971349ms
May  6 18:57:17.436: INFO: Pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142002754s
May  6 18:57:19.463: INFO: Pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.168319641s
May  6 18:57:21.472: INFO: Pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177900983s
STEP: Saw pod success
May  6 18:57:21.472: INFO: Pod "pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0" satisfied condition "Succeeded or Failed"
May  6 18:57:21.477: INFO: Trying to get logs from node kali-worker pod pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0 container test-container: 
STEP: delete the pod
May  6 18:57:21.695: INFO: Waiting for pod pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0 to disappear
May  6 18:57:21.828: INFO: Pod pod-80ca4fe2-d074-4ca6-9ba0-a77ea059b5f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:21.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4875" for this suite.

• [SLOW TEST:6.628 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3957,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:21.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9648" for this suite.

• [SLOW TEST:17.392 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":235,"skipped":3963,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:39.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  6 18:57:39.649: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  6 18:57:39.703: INFO: Waiting for terminating namespaces to be deleted...
May  6 18:57:39.706: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
May  6 18:57:39.711: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:57:39.711: INFO: 	Container kindnet-cni ready: true, restart count 1
May  6 18:57:39.711: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:57:39.711: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:57:39.711: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
May  6 18:57:39.750: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:57:39.750: INFO: 	Container kube-proxy ready: true, restart count 0
May  6 18:57:39.750: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  6 18:57:39.750: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
May  6 18:57:40.019: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker
May  6 18:57:40.020: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2
May  6 18:57:40.020: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node kali-worker2
May  6 18:57:40.020: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
May  6 18:57:40.020: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
May  6 18:57:40.054: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1.160c856a4c542a2d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-876/filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1 to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1.160c856ab50fd8cb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1.160c856b7913b1ea], Reason = [Created], Message = [Created container filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1.160c856b8a526da7], Reason = [Started], Message = [Started container filler-pod-2871daf4-a307-48da-821c-1fc08be6e5d1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97604234-60ad-4de3-92a1-4c04059aecda.160c856a4b51c319], Reason = [Scheduled], Message = [Successfully assigned sched-pred-876/filler-pod-97604234-60ad-4de3-92a1-4c04059aecda to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97604234-60ad-4de3-92a1-4c04059aecda.160c856a9f2aa346], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97604234-60ad-4de3-92a1-4c04059aecda.160c856b2df1c961], Reason = [Created], Message = [Created container filler-pod-97604234-60ad-4de3-92a1-4c04059aecda]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-97604234-60ad-4de3-92a1-4c04059aecda.160c856b57599886], Reason = [Started], Message = [Started container filler-pod-97604234-60ad-4de3-92a1-4c04059aecda]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.160c856c2c6acf36], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:49.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-876" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.079 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":236,"skipped":3983,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:49.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May  6 18:57:49.427: INFO: Waiting up to 5m0s for pod "pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc" in namespace "emptydir-6405" to be "Succeeded or Failed"
May  6 18:57:49.451: INFO: Pod "pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.00534ms
May  6 18:57:51.556: INFO: Pod "pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129017825s
May  6 18:57:53.560: INFO: Pod "pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133456591s
STEP: Saw pod success
May  6 18:57:53.560: INFO: Pod "pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc" satisfied condition "Succeeded or Failed"
May  6 18:57:53.564: INFO: Trying to get logs from node kali-worker2 pod pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc container test-container: 
STEP: delete the pod
May  6 18:57:53.778: INFO: Waiting for pod pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc to disappear
May  6 18:57:53.781: INFO: Pod pod-459d29b0-bf0a-4577-a962-4a4c3d5968cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:57:53.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6405" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3986,"failed":0}

------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:57:53.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:57:54.097: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892" in namespace "security-context-test-5911" to be "Succeeded or Failed"
May  6 18:57:54.197: INFO: Pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892": Phase="Pending", Reason="", readiness=false. Elapsed: 100.585971ms
May  6 18:57:56.371: INFO: Pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274026763s
May  6 18:57:58.497: INFO: Pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892": Phase="Running", Reason="", readiness=true. Elapsed: 4.400221617s
May  6 18:58:00.501: INFO: Pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.40436598s
May  6 18:58:00.501: INFO: Pod "busybox-readonly-false-5fe320f3-3488-4c2c-ba74-61a7cb794892" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:58:00.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5911" for this suite.

• [SLOW TEST:6.719 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":3986,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:58:00.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May  6 18:58:00.716: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7427 /api/v1/namespaces/watch-7427/configmaps/e2e-watch-test-watch-closed c6131133-a6f1-425d-adf6-536a77b9f360 2073991 0 2020-05-06 18:58:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-06 18:58:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:58:00.716: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7427 /api/v1/namespaces/watch-7427/configmaps/e2e-watch-test-watch-closed c6131133-a6f1-425d-adf6-536a77b9f360 2073992 0 2020-05-06 18:58:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-06 18:58:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May  6 18:58:00.772: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7427 /api/v1/namespaces/watch-7427/configmaps/e2e-watch-test-watch-closed c6131133-a6f1-425d-adf6-536a77b9f360 2073993 0 2020-05-06 18:58:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-06 18:58:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  6 18:58:00.772: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7427 /api/v1/namespaces/watch-7427/configmaps/e2e-watch-test-watch-closed c6131133-a6f1-425d-adf6-536a77b9f360 2073994 0 2020-05-06 18:58:00 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-06 18:58:00 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:58:00.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7427" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":239,"skipped":4015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:58:00.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:58:01.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May  6 18:58:03.209: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:02Z]] name:name1 resourceVersion:2074009 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4a102398-cd3b-467c-aeb4-13a0d120a82b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May  6 18:58:13.217: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:13Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:13Z]] name:name2 resourceVersion:2074054 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:597fb205-57e4-45a0-af4b-958825b3fa71] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May  6 18:58:23.254: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:23Z]] name:name1 resourceVersion:2074083 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4a102398-cd3b-467c-aeb4-13a0d120a82b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May  6 18:58:33.262: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:33Z]] name:name2 resourceVersion:2074111 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:597fb205-57e4-45a0-af4b-958825b3fa71] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May  6 18:58:43.360: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:23Z]] name:name1 resourceVersion:2074141 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:4a102398-cd3b-467c-aeb4-13a0d120a82b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May  6 18:58:53.370: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-06T18:58:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-06T18:58:33Z]] name:name2 resourceVersion:2074171 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:597fb205-57e4-45a0-af4b-958825b3fa71] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:03.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8698" for this suite.

• [SLOW TEST:62.932 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":240,"skipped":4057,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:03.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May  6 18:59:04.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May  6 18:59:04.295: INFO: stderr: ""
May  6 18:59:04.295: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:04.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2496" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":241,"skipped":4079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:04.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May  6 18:59:04.438: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
May  6 18:59:05.331: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May  6 18:59:08.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:59:10.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388345, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 18:59:12.768: INFO: Waited 525.37628ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:13.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5736" for this suite.

• [SLOW TEST:9.019 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":242,"skipped":4112,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:13.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 18:59:13.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  6 18:59:16.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8676 create -f -'
May  6 18:59:20.279: INFO: stderr: ""
May  6 18:59:20.279: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  6 18:59:20.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8676 delete e2e-test-crd-publish-openapi-2049-crds test-cr'
May  6 18:59:20.531: INFO: stderr: ""
May  6 18:59:20.531: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May  6 18:59:20.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8676 apply -f -'
May  6 18:59:20.842: INFO: stderr: ""
May  6 18:59:20.842: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  6 18:59:20.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8676 delete e2e-test-crd-publish-openapi-2049-crds test-cr'
May  6 18:59:20.947: INFO: stderr: ""
May  6 18:59:20.947: INFO: stdout: "e2e-test-crd-publish-openapi-2049-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May  6 18:59:20.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2049-crds'
May  6 18:59:24.372: INFO: stderr: ""
May  6 18:59:24.372: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2049-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:27.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8676" for this suite.

• [SLOW TEST:13.959 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":243,"skipped":4121,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:27.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  6 18:59:31.938: INFO: Successfully updated pod "annotationupdate6afea0a6-f35e-448c-8c1d-c30a1688890d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:36.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9078" for this suite.

• [SLOW TEST:8.818 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4126,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:36.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:59:36.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e" in namespace "projected-2709" to be "Succeeded or Failed"
May  6 18:59:36.367: INFO: Pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e": Phase="Pending", Reason="", readiness=false. Elapsed: 40.466143ms
May  6 18:59:38.660: INFO: Pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333601121s
May  6 18:59:40.665: INFO: Pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339014017s
May  6 18:59:42.670: INFO: Pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.343650001s
STEP: Saw pod success
May  6 18:59:42.670: INFO: Pod "downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e" satisfied condition "Succeeded or Failed"
May  6 18:59:42.673: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e container client-container: <nil>
STEP: delete the pod
May  6 18:59:42.871: INFO: Waiting for pod downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e to disappear
May  6 18:59:42.953: INFO: Pod downwardapi-volume-3f114d5e-fcc2-4dfb-b8ff-f10d27b2da6e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:42.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2709" for this suite.

• [SLOW TEST:6.937 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4127,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:43.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 18:59:43.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362" in namespace "projected-4122" to be "Succeeded or Failed"
May  6 18:59:43.410: INFO: Pod "downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362": Phase="Pending", Reason="", readiness=false. Elapsed: 11.591788ms
May  6 18:59:45.522: INFO: Pod "downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123117597s
May  6 18:59:47.792: INFO: Pod "downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.392909287s
STEP: Saw pod success
May  6 18:59:47.792: INFO: Pod "downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362" satisfied condition "Succeeded or Failed"
May  6 18:59:47.794: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362 container client-container: <nil>
STEP: delete the pod
May  6 18:59:47.986: INFO: Waiting for pod downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362 to disappear
May  6 18:59:48.108: INFO: Pod downwardapi-volume-bfee0644-eba9-4627-85e5-70faa57c1362 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:48.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4122" for this suite.

• [SLOW TEST:5.122 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4130,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:48.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:48.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5006" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":247,"skipped":4146,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:48.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-61f4fab6-03a9-4336-94f7-b758e97404e9
STEP: Creating a pod to test consume secrets
May  6 18:59:48.445: INFO: Waiting up to 5m0s for pod "pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f" in namespace "secrets-1726" to be "Succeeded or Failed"
May  6 18:59:48.515: INFO: Pod "pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 70.035108ms
May  6 18:59:50.576: INFO: Pod "pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130379686s
May  6 18:59:52.579: INFO: Pod "pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13404528s
STEP: Saw pod success
May  6 18:59:52.579: INFO: Pod "pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f" satisfied condition "Succeeded or Failed"
May  6 18:59:52.582: INFO: Trying to get logs from node kali-worker pod pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f container secret-volume-test: <nil>
STEP: delete the pod
May  6 18:59:52.613: INFO: Waiting for pod pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f to disappear
May  6 18:59:52.629: INFO: Pod pod-secrets-ee54056a-d063-4324-8b85-65bcd727ad3f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 18:59:52.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1726" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 18:59:52.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May  6 18:59:53.025: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:53.054: INFO: Number of nodes with available pods: 0
May  6 18:59:53.054: INFO: Node kali-worker is running more than one daemon pod
May  6 18:59:54.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:54.063: INFO: Number of nodes with available pods: 0
May  6 18:59:54.063: INFO: Node kali-worker is running more than one daemon pod
May  6 18:59:55.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:55.061: INFO: Number of nodes with available pods: 0
May  6 18:59:55.061: INFO: Node kali-worker is running more than one daemon pod
May  6 18:59:56.101: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:56.104: INFO: Number of nodes with available pods: 0
May  6 18:59:56.104: INFO: Node kali-worker is running more than one daemon pod
May  6 18:59:57.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:57.063: INFO: Number of nodes with available pods: 1
May  6 18:59:57.063: INFO: Node kali-worker is running more than one daemon pod
May  6 18:59:58.095: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:58.145: INFO: Number of nodes with available pods: 2
May  6 18:59:58.145: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May  6 18:59:58.301: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:58.310: INFO: Number of nodes with available pods: 1
May  6 18:59:58.310: INFO: Node kali-worker2 is running more than one daemon pod
May  6 18:59:59.314: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 18:59:59.317: INFO: Number of nodes with available pods: 1
May  6 18:59:59.317: INFO: Node kali-worker2 is running more than one daemon pod
May  6 19:00:00.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 19:00:00.319: INFO: Number of nodes with available pods: 1
May  6 19:00:00.319: INFO: Node kali-worker2 is running more than one daemon pod
May  6 19:00:01.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 19:00:01.319: INFO: Number of nodes with available pods: 1
May  6 19:00:01.319: INFO: Node kali-worker2 is running more than one daemon pod
May  6 19:00:02.315: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  6 19:00:02.319: INFO: Number of nodes with available pods: 2
May  6 19:00:02.319: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6091, will wait for the garbage collector to delete the pods
May  6 19:00:02.385: INFO: Deleting DaemonSet.extensions daemon-set took: 6.962199ms
May  6 19:00:02.685: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278861ms
May  6 19:00:13.796: INFO: Number of nodes with available pods: 0
May  6 19:00:13.796: INFO: Number of running nodes: 0, number of available pods: 0
May  6 19:00:13.800: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6091/daemonsets","resourceVersion":"2074677"},"items":null}

May  6 19:00:13.802: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6091/pods","resourceVersion":"2074677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:13.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6091" for this suite.

• [SLOW TEST:20.998 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
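The DaemonSet test above repeatedly skips node kali-control-plane because the test DaemonSet's pods carry no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A minimal sketch of that skip decision follows; the helper names and dict shapes are illustrative, not the e2e framework's actual Go code:

```python
# Sketch of the node-skip decision the log reports: a node is only checked
# for daemon pods if every NoSchedule taint on it is tolerated by the pod.
# Helper and field names here are illustrative, not from the k8s source.

def tolerates(toleration, taint):
    """A toleration matches a taint when key and effect line up
    (an empty key or effect in the toleration matches anything)."""
    key_ok = toleration.get("key") in (None, "", taint["key"])
    effect_ok = toleration.get("effect") in (None, "", taint["effect"])
    return key_ok and effect_ok

def should_check_node(node_taints, pod_tolerations):
    """Return False (skip the node) if any NoSchedule taint is untolerated."""
    for taint in node_taints:
        if taint["effect"] != "NoSchedule":
            continue
        if not any(tolerates(t, taint) for t in pod_tolerations):
            return False
    return True

master_taint = {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}

# The conformance DaemonSet has no master toleration, so the control-plane
# node is skipped while the untainted workers are checked:
print(should_check_node([master_taint], []))  # False -> skip kali-control-plane
print(should_check_node([], []))              # True  -> check kali-worker
```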
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":249,"skipped":4202,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:13.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  6 19:00:13.916: INFO: Waiting up to 5m0s for pod "downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34" in namespace "downward-api-2356" to be "Succeeded or Failed"
May  6 19:00:13.918: INFO: Pod "downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181524ms
May  6 19:00:15.975: INFO: Pod "downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058810765s
May  6 19:00:17.979: INFO: Pod "downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063152632s
STEP: Saw pod success
May  6 19:00:17.979: INFO: Pod "downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34" satisfied condition "Succeeded or Failed"
May  6 19:00:17.982: INFO: Trying to get logs from node kali-worker2 pod downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34 container dapi-container: 
STEP: delete the pod
May  6 19:00:18.019: INFO: Waiting for pod downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34 to disappear
May  6 19:00:18.034: INFO: Pod downward-api-443f06a6-1ee0-48e4-90b4-fac30c041a34 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:18.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2356" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4209,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:18.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-7b5de5d7-fc69-4204-ad62-ee35fd783713
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:18.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-763" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":251,"skipped":4212,"failed":0}
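The ConfigMap test above passes because the API server rejects a ConfigMap whose `data` map contains an empty key. A rough, simplified sketch of that validation rule (the real API-server validation also enforces a 253-character limit and validates the whole object; this is only the key check):

```python
import re

# Simplified sketch of ConfigMap data-key validation: keys must be
# non-empty and use only alphanumerics, '-', '_' or '.'.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_configmap_data(data):
    """Return a list of error strings; an empty list means the data is valid."""
    errors = []
    for key in data:
        if key == "":
            errors.append('data: Invalid value: "": key must not be empty')
        elif not KEY_RE.match(key):
            errors.append(f"data[{key}]: invalid character in key")
    return errors

print(validate_configmap_data({"": "value"}))                          # rejected: empty key
print(validate_configmap_data({"game.properties": "enemies=aliens"}))  # [] -> accepted
```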
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:18.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May  6 19:00:18.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-221'
May  6 19:00:18.406: INFO: stderr: ""
May  6 19:00:18.406: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
May  6 19:00:18.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-221'
May  6 19:00:22.685: INFO: stderr: ""
May  6 19:00:22.686: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:22.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-221" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":252,"skipped":4231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:22.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-74fd7af3-d3ff-4e91-90aa-5c2a9dedc0b5
STEP: Creating secret with name secret-projected-all-test-volume-b531e624-30c4-4b0c-b934-8a40ea4dc82e
STEP: Creating a pod to test Check all projections for projected volume plugin
May  6 19:00:22.916: INFO: Waiting up to 5m0s for pod "projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a" in namespace "projected-3882" to be "Succeeded or Failed"
May  6 19:00:22.983: INFO: Pod "projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a": Phase="Pending", Reason="", readiness=false. Elapsed: 67.186667ms
May  6 19:00:24.987: INFO: Pod "projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071704077s
May  6 19:00:26.992: INFO: Pod "projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075934894s
STEP: Saw pod success
May  6 19:00:26.992: INFO: Pod "projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a" satisfied condition "Succeeded or Failed"
May  6 19:00:26.996: INFO: Trying to get logs from node kali-worker2 pod projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a container projected-all-volume-test: 
STEP: delete the pod
May  6 19:00:27.047: INFO: Waiting for pod projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a to disappear
May  6 19:00:27.058: INFO: Pod projected-volume-1d9a50c2-140b-4191-8fe4-8217c934611a no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:27.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3882" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4270,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:27.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  6 19:00:27.150: INFO: Waiting up to 5m0s for pod "downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe" in namespace "downward-api-9609" to be "Succeeded or Failed"
May  6 19:00:27.164: INFO: Pod "downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.759807ms
May  6 19:00:29.168: INFO: Pod "downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018012133s
May  6 19:00:31.173: INFO: Pod "downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022958565s
STEP: Saw pod success
May  6 19:00:31.173: INFO: Pod "downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe" satisfied condition "Succeeded or Failed"
May  6 19:00:31.176: INFO: Trying to get logs from node kali-worker2 pod downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe container dapi-container: 
STEP: delete the pod
May  6 19:00:31.215: INFO: Waiting for pod downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe to disappear
May  6 19:00:31.243: INFO: Pod downward-api-81b82004-5b62-4dea-bbd8-c9cd450cbdfe no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:31.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9609" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4272,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:31.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May  6 19:00:36.914: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:00:37.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1826" for this suite.

• [SLOW TEST:6.993 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
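The ReplicaSet test above exercises controller adoption and release, both of which hinge on label-selector matching: an orphan pod whose labels match the selector is adopted, and changing a pod's labels so they stop matching gets it released. A toy sketch of that matching rule (names and structures are illustrative; real controllers also manage ownerReferences through the API server):

```python
# Toy sketch of ReplicaSet adoption/release: ownership follows the label
# selector. Illustrative only; not the actual controller-manager code.

def selector_matches(selector, labels):
    """matchLabels semantics: every selector pair must appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def reconcile_ownership(selector, pods):
    """Partition pods into (adopted, released) by selector match."""
    adopted = [p for p in pods if selector_matches(selector, p["labels"])]
    released = [p for p in pods if not selector_matches(selector, p["labels"])]
    return adopted, released

selector = {"name": "pod-adoption-release"}
pod = {"name": "pod-adoption-release",
       "labels": {"name": "pod-adoption-release"}}

adopted, released = reconcile_ownership(selector, [pod])
print(len(adopted), len(released))  # 1 0 -> the orphan pod is adopted

pod["labels"]["name"] = "no-longer-matching"  # the matched label changes
adopted, released = reconcile_ownership(selector, [pod])
print(len(adopted), len(released))  # 0 1 -> the pod is released
```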
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":255,"skipped":4284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:00:38.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  6 19:00:38.733: INFO: PodSpec: initContainers in spec.initContainers
May  6 19:01:30.774: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-25f6eaa5-d421-432d-b9a0-736caa658673", GenerateName:"", Namespace:"init-container-1409", SelfLink:"/api/v1/namespaces/init-container-1409/pods/pod-init-25f6eaa5-d421-432d-b9a0-736caa658673", UID:"f68b4370-119a-4aff-bf3d-e529e3a5497a", ResourceVersion:"2075094", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724388439, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"733467506"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ba44a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ba44c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ba44e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ba4500)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2tlrr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005187cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tlrr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tlrr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tlrr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0041fc1c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ab5a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0041fc250)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0041fc270)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0041fc278), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0041fc27c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388440, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388440, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388440, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388439, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.18", PodIP:"10.244.1.175", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.175"}}, StartTime:(*v1.Time)(0xc002ba4520), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002ba4560), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ab5b20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ae1a91aec7296c58dcb21849c1c4313b590397ea6389ef6890f6bf1897628fd9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ba4580), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ba4540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0041fc2ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:01:30.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1409" for this suite.

• [SLOW TEST:52.614 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
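The init-container test above confirms the documented ordering rule visible in the pod dump: init containers run sequentially, each must exit 0 before the next starts, and with `restartPolicy: Always` a failing init container (here `init1` running `/bin/false`, RestartCount 3) is retried while `init2` and the app container `run1` stay in Waiting. A small simulation of that gate, with illustrative names only (the kubelet's real state machine is considerably more involved):

```python
# Simulation of the init-container gate the test verifies: app containers
# never start while an earlier init container keeps failing. Purely
# illustrative; not kubelet code.

def pod_status(init_exit_codes):
    """init_exit_codes: exit codes of the init containers, in order.
    Returns (initialized, app_started, failed_index_or_None) assuming
    restartPolicy=Always, where a nonzero exit blocks everything after it."""
    for i, code in enumerate(init_exit_codes):
        if code != 0:
            # Failed init container is restarted in place; later init
            # containers and all app containers remain Waiting.
            return False, False, i
    return True, True, None

# init1 runs /bin/false (exit 1), init2 /bin/true: run1 must not start.
initialized, app_started, failed = pod_status([1, 0])
print(initialized, app_started, failed)  # False False 0

# If both init containers succeed, the app container may start.
print(pod_status([0, 0]))  # (True, True, None)
```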
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":256,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:01:30.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-bswv
STEP: Creating a pod to test atomic-volume-subpath
May  6 19:01:31.086: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bswv" in namespace "subpath-1173" to be "Succeeded or Failed"
May  6 19:01:31.162: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Pending", Reason="", readiness=false. Elapsed: 76.234122ms
May  6 19:01:33.167: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080779586s
May  6 19:01:35.171: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 4.085062014s
May  6 19:01:37.175: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 6.088776751s
May  6 19:01:39.179: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 8.093159622s
May  6 19:01:41.198: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 10.112561903s
May  6 19:01:43.202: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 12.116536028s
May  6 19:01:45.207: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 14.120831434s
May  6 19:01:47.210: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 16.124101422s
May  6 19:01:49.220: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 18.133786291s
May  6 19:01:51.319: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 20.23299586s
May  6 19:01:53.322: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 22.236692366s
May  6 19:01:55.327: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Running", Reason="", readiness=true. Elapsed: 24.24086693s
May  6 19:01:57.332: INFO: Pod "pod-subpath-test-downwardapi-bswv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.246006474s
STEP: Saw pod success
May  6 19:01:57.332: INFO: Pod "pod-subpath-test-downwardapi-bswv" satisfied condition "Succeeded or Failed"
May  6 19:01:57.335: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-bswv container test-container-subpath-downwardapi-bswv: 
STEP: delete the pod
May  6 19:01:57.393: INFO: Waiting for pod pod-subpath-test-downwardapi-bswv to disappear
May  6 19:01:57.401: INFO: Pod pod-subpath-test-downwardapi-bswv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bswv
May  6 19:01:57.401: INFO: Deleting pod "pod-subpath-test-downwardapi-bswv" in namespace "subpath-1173"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:01:57.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1173" for this suite.

• [SLOW TEST:26.564 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":257,"skipped":4355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:01:57.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 19:01:58.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 19:02:00.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388518, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388518, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388518, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388518, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 19:02:03.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:02:13.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3531" for this suite.
STEP: Destroying namespace "webhook-3531-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.145 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":258,"skipped":4400,"failed":0}
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:02:13.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0506 19:02:25.694258       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  6 19:02:25.694: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:02:25.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7381" for this suite.

• [SLOW TEST:12.411 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":259,"skipped":4400,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:02:25.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 19:02:26.199: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:02:27.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4438" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":260,"skipped":4416,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:02:27.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May  6 19:02:27.527: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May  6 19:02:27.532: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May  6 19:02:27.532: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May  6 19:02:27.557: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
May  6 19:02:27.557: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May  6 19:02:27.612: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May  6 19:02:27.612: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May  6 19:02:35.093: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:02:35.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9229" for this suite.

• [SLOW TEST:7.810 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":261,"skipped":4429,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:02:35.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-636cd6ca-a1a6-4563-a871-4b02f94cb87c in namespace container-probe-9319
May  6 19:02:39.592: INFO: Started pod busybox-636cd6ca-a1a6-4563-a871-4b02f94cb87c in namespace container-probe-9319
STEP: checking the pod's current state and verifying that restartCount is present
May  6 19:02:39.596: INFO: Initial restart count of pod busybox-636cd6ca-a1a6-4563-a871-4b02f94cb87c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:06:40.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9319" for this suite.

• [SLOW TEST:245.652 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4449,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:06:40.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:06:41.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5114" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":263,"skipped":4490,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:06:41.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 19:06:41.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 19:06:43.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388801, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388801, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388801, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388801, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 19:06:47.051: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:06:47.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9583" for this suite.
STEP: Destroying namespace "webhook-9583-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.113 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":264,"skipped":4498,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:06:48.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7523.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7523.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7523.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7523.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.187.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.187.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.187.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.187.73_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7523.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7523.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7523.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7523.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7523.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7523.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 73.187.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.187.73_udp@PTR;check="$$(dig +tcp +noall +answer +search 73.187.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.187.73_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 19:06:57.002: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.005: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.007: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.009: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.029: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.037: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:06:57.056: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:02.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.066: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.068: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.087: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.089: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.091: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.094: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:02.677: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:07.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.071: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.091: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.094: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.097: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.100: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:07.124: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:12.353: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.428: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.434: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.437: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.655: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.657: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.660: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.662: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:12.752: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:17.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.108: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.250: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.254: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.275: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.281: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:17.307: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:22.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.068: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.071: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.242: INFO: Unable to read jessie_udp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.247: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.250: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local from pod dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89: the server could not find the requested resource (get pods dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89)
May  6 19:07:22.273: INFO: Lookups using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 failed for: [wheezy_udp@dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@dns-test-service.dns-7523.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_udp@dns-test-service.dns-7523.svc.cluster.local jessie_tcp@dns-test-service.dns-7523.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7523.svc.cluster.local]

May  6 19:07:27.128: INFO: DNS probes using dns-7523/dns-test-2cc030d6-650c-456c-9b30-6b363a45fe89 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:07:27.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7523" for this suite.

• [SLOW TEST:39.734 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":265,"skipped":4503,"failed":0}
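Editor's note: every name probed above follows the standard Kubernetes service DNS schema (`<service>.<namespace>.svc.cluster.local` for A/AAAA records, `_<port>._<proto>.` prefixed for SRV records). A minimal illustrative sketch of how those names are derived — this helper is an assumption for clarity, not the test's own code:

```python
def service_dns_names(service: str, namespace: str, port: str, proto: str) -> list[str]:
    """Return the A/AAAA record name and SRV record name a ClusterIP service exposes.

    Illustrative helper (not from the e2e test source); the service name,
    namespace, and port values below are taken from the log.
    """
    base = f"{service}.{namespace}.svc.cluster.local"
    return [base, f"_{port}._{proto}.{base}"]

# Values from the log above: service "dns-test-service" in namespace "dns-7523",
# probed over both UDP and TCP for a port named "http".
names = service_dns_names("dns-test-service", "dns-7523", "http", "tcp")
```

The retries above are expected while the test pod's DNS records propagate; the probe loop repeats until all lookups succeed, which they do at 19:07:27.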
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:07:27.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  6 19:07:28.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33" in namespace "projected-7654" to be "Succeeded or Failed"
May  6 19:07:28.044: INFO: Pod "downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33": Phase="Pending", Reason="", readiness=false. Elapsed: 16.274115ms
May  6 19:07:30.087: INFO: Pod "downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059202751s
May  6 19:07:32.092: INFO: Pod "downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063776132s
STEP: Saw pod success
May  6 19:07:32.092: INFO: Pod "downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33" satisfied condition "Succeeded or Failed"
May  6 19:07:32.094: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33 container client-container: 
STEP: delete the pod
May  6 19:07:32.152: INFO: Waiting for pod downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33 to disappear
May  6 19:07:32.163: INFO: Pod downwardapi-volume-f48b963e-82b4-4e9c-9069-fbb604adcb33 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:07:32.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7654" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4542,"failed":0}
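Editor's note: the pod this test creates mounts a projected downwardAPI volume that renders the container's CPU limit into a file. A hedged sketch of that manifest fragment as a Python dict — the file path `cpu_limit` and the divisor are assumptions, though the container name `client-container` appears in the log:

```python
# Illustrative pod.spec fragment (assumption, not the test's exact manifest):
# a projected volume whose downwardAPI source exposes limits.cpu as a file.
pod_spec = {
    "volumes": [{
        "name": "podinfo",
        "projected": {"sources": [{
            "downwardAPI": {"items": [{
                "path": "cpu_limit",  # file the container reads back
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.cpu",
                    "divisor": "1m",  # render the limit in millicores
                },
            }]},
        }]},
    }],
}
```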
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:07:32.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  6 19:07:32.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:07:47.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5720" for this suite.

• [SLOW TEST:15.815 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":267,"skipped":4544,"failed":0}
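Editor's note: the behavior verified here is that flipping a CRD version to `served: false` removes its definition from the published OpenAPI spec while leaving the other version intact. A sketch of the `spec.versions` stanza being flipped (version names are placeholders):

```python
# Illustrative CRD spec.versions after the "mark a version not served" step.
# Exactly one version may be the storage version; an unserved version's
# schema drops out of the aggregated OpenAPI document.
versions = [
    {"name": "v1", "served": True,  "storage": True},
    {"name": "v2", "served": False, "storage": False},  # removed from the spec
]
```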
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:07:47.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-740/configmap-test-72ce13cc-116f-4ca4-8aec-37e014c334d5
STEP: Creating a pod to test consume configMaps
May  6 19:07:48.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed" in namespace "configmap-740" to be "Succeeded or Failed"
May  6 19:07:48.523: INFO: Pod "pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.451985ms
May  6 19:07:50.662: INFO: Pod "pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143301715s
May  6 19:07:52.667: INFO: Pod "pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1475947s
STEP: Saw pod success
May  6 19:07:52.667: INFO: Pod "pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed" satisfied condition "Succeeded or Failed"
May  6 19:07:52.669: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed container env-test: 
STEP: delete the pod
May  6 19:07:52.833: INFO: Waiting for pod pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed to disappear
May  6 19:07:52.944: INFO: Pod pod-configmaps-b05b2718-7494-4ee3-9839-e4964498eaed no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:07:52.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-740" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4560,"failed":0}
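Editor's note: "consumable via environment variable" means the container declares an env entry whose value is resolved from a ConfigMap key at pod start. A sketch of that wiring — the ConfigMap name is from the log; the env var name and key are assumptions:

```python
# Illustrative container env entry (assumed names) sourcing a value from the
# ConfigMap created by this test.
env_entry = {
    "name": "CONFIG_DATA_1",  # assumed env var name
    "valueFrom": {
        "configMapKeyRef": {
            "name": "configmap-test-72ce13cc-116f-4ca4-8aec-37e014c334d5",
            "key": "data-1",  # assumed key
        },
    },
}
```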
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:07:52.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May  6 19:07:53.090: INFO: Waiting up to 5m0s for pod "pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a" in namespace "emptydir-1081" to be "Succeeded or Failed"
May  6 19:07:53.116: INFO: Pod "pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.959017ms
May  6 19:07:55.219: INFO: Pod "pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129177901s
May  6 19:07:57.223: INFO: Pod "pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132883755s
STEP: Saw pod success
May  6 19:07:57.223: INFO: Pod "pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a" satisfied condition "Succeeded or Failed"
May  6 19:07:57.236: INFO: Trying to get logs from node kali-worker pod pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a container test-container: 
STEP: delete the pod
May  6 19:07:57.310: INFO: Waiting for pod pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a to disappear
May  6 19:07:57.356: INFO: Pod pod-d66a043b-82eb-41ca-b44d-9512fa32cd1a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:07:57.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1081" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4597,"failed":0}
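Editor's note: "on tmpfs" corresponds to an emptyDir volume with `medium: Memory`, which the kubelet backs with a RAM-based filesystem. A sketch of the volume and mount this test uses (mount path assumed):

```python
# Illustrative emptyDir-on-tmpfs volume declaration and its container mount.
volume = {"name": "test-volume", "emptyDir": {"medium": "Memory"}}  # tmpfs-backed
mount = {"name": "test-volume", "mountPath": "/test-volume"}        # assumed path
```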
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:07:57.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  6 19:07:58.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  6 19:08:00.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 19:08:02.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388878, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 19:08:05.813: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 19:08:05.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:08:07.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7304" for this suite.
STEP: Destroying namespace "webhook-7304-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.966 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":270,"skipped":4604,"failed":0}
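Editor's note: the webhook registered "via the AdmissionRegistration API" above is a ValidatingWebhookConfiguration matching CREATE, UPDATE, and DELETE on a custom resource. A hedged sketch of such a configuration — the CR group, resource, and webhook path are placeholders; the service name `e2e-test-webhook` and namespace `webhook-7304` appear in the log:

```python
# Illustrative ValidatingWebhookConfiguration (placeholder group/resource/path)
# of the shape this test registers to deny CR create/update/delete.
webhook = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-custom-resource-operations"},
    "webhooks": [{
        "name": "deny-custom-resource.example.com",
        "rules": [{
            "apiGroups": ["stable.example.com"],   # placeholder CR group
            "apiVersions": ["v1"],
            "operations": ["CREATE", "UPDATE", "DELETE"],
            "resources": ["e2e-test-crds"],        # placeholder resource
        }],
        "clientConfig": {"service": {
            "name": "e2e-test-webhook",
            "namespace": "webhook-7304",
            "path": "/custom-resource",            # placeholder path
        }},
        "admissionReviewVersions": ["v1"],
        "sideEffects": "None",
    }],
}
```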
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:08:07.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May  6 19:08:21.858: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:21.858: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:21.890501       7 log.go:172] (0xc005780370) (0xc0027d1e00) Create stream
I0506 19:08:21.890530       7 log.go:172] (0xc005780370) (0xc0027d1e00) Stream added, broadcasting: 1
I0506 19:08:21.892754       7 log.go:172] (0xc005780370) Reply frame received for 1
I0506 19:08:21.892783       7 log.go:172] (0xc005780370) (0xc002389b80) Create stream
I0506 19:08:21.892791       7 log.go:172] (0xc005780370) (0xc002389b80) Stream added, broadcasting: 3
I0506 19:08:21.893917       7 log.go:172] (0xc005780370) Reply frame received for 3
I0506 19:08:21.893951       7 log.go:172] (0xc005780370) (0xc002389c20) Create stream
I0506 19:08:21.893964       7 log.go:172] (0xc005780370) (0xc002389c20) Stream added, broadcasting: 5
I0506 19:08:21.894852       7 log.go:172] (0xc005780370) Reply frame received for 5
I0506 19:08:21.974138       7 log.go:172] (0xc005780370) Data frame received for 5
I0506 19:08:21.974184       7 log.go:172] (0xc002389c20) (5) Data frame handling
I0506 19:08:21.974216       7 log.go:172] (0xc005780370) Data frame received for 3
I0506 19:08:21.974228       7 log.go:172] (0xc002389b80) (3) Data frame handling
I0506 19:08:21.974244       7 log.go:172] (0xc002389b80) (3) Data frame sent
I0506 19:08:21.974255       7 log.go:172] (0xc005780370) Data frame received for 3
I0506 19:08:21.974264       7 log.go:172] (0xc002389b80) (3) Data frame handling
I0506 19:08:21.976239       7 log.go:172] (0xc005780370) Data frame received for 1
I0506 19:08:21.976273       7 log.go:172] (0xc0027d1e00) (1) Data frame handling
I0506 19:08:21.976319       7 log.go:172] (0xc0027d1e00) (1) Data frame sent
I0506 19:08:21.976348       7 log.go:172] (0xc005780370) (0xc0027d1e00) Stream removed, broadcasting: 1
I0506 19:08:21.976469       7 log.go:172] (0xc005780370) (0xc0027d1e00) Stream removed, broadcasting: 1
I0506 19:08:21.976485       7 log.go:172] (0xc005780370) (0xc002389b80) Stream removed, broadcasting: 3
I0506 19:08:21.976507       7 log.go:172] (0xc005780370) (0xc002389c20) Stream removed, broadcasting: 5
May  6 19:08:21.976: INFO: Exec stderr: ""
May  6 19:08:21.976: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:21.976: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:21.976614       7 log.go:172] (0xc005780370) Go away received
I0506 19:08:22.001920       7 log.go:172] (0xc005780630) (0xc0027d1ea0) Create stream
I0506 19:08:22.001944       7 log.go:172] (0xc005780630) (0xc0027d1ea0) Stream added, broadcasting: 1
I0506 19:08:22.004232       7 log.go:172] (0xc005780630) Reply frame received for 1
I0506 19:08:22.004282       7 log.go:172] (0xc005780630) (0xc000ebbe00) Create stream
I0506 19:08:22.004298       7 log.go:172] (0xc005780630) (0xc000ebbe00) Stream added, broadcasting: 3
I0506 19:08:22.005624       7 log.go:172] (0xc005780630) Reply frame received for 3
I0506 19:08:22.005674       7 log.go:172] (0xc005780630) (0xc00227ed20) Create stream
I0506 19:08:22.005689       7 log.go:172] (0xc005780630) (0xc00227ed20) Stream added, broadcasting: 5
I0506 19:08:22.006708       7 log.go:172] (0xc005780630) Reply frame received for 5
I0506 19:08:22.079447       7 log.go:172] (0xc005780630) Data frame received for 5
I0506 19:08:22.079482       7 log.go:172] (0xc00227ed20) (5) Data frame handling
I0506 19:08:22.079517       7 log.go:172] (0xc005780630) Data frame received for 3
I0506 19:08:22.079545       7 log.go:172] (0xc000ebbe00) (3) Data frame handling
I0506 19:08:22.079564       7 log.go:172] (0xc000ebbe00) (3) Data frame sent
I0506 19:08:22.079581       7 log.go:172] (0xc005780630) Data frame received for 3
I0506 19:08:22.079596       7 log.go:172] (0xc000ebbe00) (3) Data frame handling
I0506 19:08:22.080789       7 log.go:172] (0xc005780630) Data frame received for 1
I0506 19:08:22.080807       7 log.go:172] (0xc0027d1ea0) (1) Data frame handling
I0506 19:08:22.080817       7 log.go:172] (0xc0027d1ea0) (1) Data frame sent
I0506 19:08:22.080830       7 log.go:172] (0xc005780630) (0xc0027d1ea0) Stream removed, broadcasting: 1
I0506 19:08:22.080911       7 log.go:172] (0xc005780630) (0xc0027d1ea0) Stream removed, broadcasting: 1
I0506 19:08:22.080921       7 log.go:172] (0xc005780630) (0xc000ebbe00) Stream removed, broadcasting: 3
I0506 19:08:22.081031       7 log.go:172] (0xc005780630) (0xc00227ed20) Stream removed, broadcasting: 5
May  6 19:08:22.081: INFO: Exec stderr: ""
May  6 19:08:22.081: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.081: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.081606       7 log.go:172] (0xc005780630) Go away received
I0506 19:08:22.109071       7 log.go:172] (0xc005780d10) (0xc0001959a0) Create stream
I0506 19:08:22.109108       7 log.go:172] (0xc005780d10) (0xc0001959a0) Stream added, broadcasting: 1
I0506 19:08:22.111415       7 log.go:172] (0xc005780d10) Reply frame received for 1
I0506 19:08:22.111442       7 log.go:172] (0xc005780d10) (0xc002389d60) Create stream
I0506 19:08:22.111455       7 log.go:172] (0xc005780d10) (0xc002389d60) Stream added, broadcasting: 3
I0506 19:08:22.112344       7 log.go:172] (0xc005780d10) Reply frame received for 3
I0506 19:08:22.112377       7 log.go:172] (0xc005780d10) (0xc002389e00) Create stream
I0506 19:08:22.112388       7 log.go:172] (0xc005780d10) (0xc002389e00) Stream added, broadcasting: 5
I0506 19:08:22.113296       7 log.go:172] (0xc005780d10) Reply frame received for 5
I0506 19:08:22.166818       7 log.go:172] (0xc005780d10) Data frame received for 3
I0506 19:08:22.166851       7 log.go:172] (0xc002389d60) (3) Data frame handling
I0506 19:08:22.166869       7 log.go:172] (0xc002389d60) (3) Data frame sent
I0506 19:08:22.166888       7 log.go:172] (0xc005780d10) Data frame received for 3
I0506 19:08:22.166902       7 log.go:172] (0xc002389d60) (3) Data frame handling
I0506 19:08:22.166931       7 log.go:172] (0xc005780d10) Data frame received for 5
I0506 19:08:22.166956       7 log.go:172] (0xc002389e00) (5) Data frame handling
I0506 19:08:22.168056       7 log.go:172] (0xc005780d10) Data frame received for 1
I0506 19:08:22.168085       7 log.go:172] (0xc0001959a0) (1) Data frame handling
I0506 19:08:22.168101       7 log.go:172] (0xc0001959a0) (1) Data frame sent
I0506 19:08:22.168116       7 log.go:172] (0xc005780d10) (0xc0001959a0) Stream removed, broadcasting: 1
I0506 19:08:22.168129       7 log.go:172] (0xc005780d10) Go away received
I0506 19:08:22.168322       7 log.go:172] (0xc005780d10) (0xc0001959a0) Stream removed, broadcasting: 1
I0506 19:08:22.168345       7 log.go:172] (0xc005780d10) (0xc002389d60) Stream removed, broadcasting: 3
I0506 19:08:22.168359       7 log.go:172] (0xc005780d10) (0xc002389e00) Stream removed, broadcasting: 5
May  6 19:08:22.168: INFO: Exec stderr: ""
May  6 19:08:22.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.168: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.196696       7 log.go:172] (0xc005fa82c0) (0xc0003be000) Create stream
I0506 19:08:22.196735       7 log.go:172] (0xc005fa82c0) (0xc0003be000) Stream added, broadcasting: 1
I0506 19:08:22.199412       7 log.go:172] (0xc005fa82c0) Reply frame received for 1
I0506 19:08:22.199446       7 log.go:172] (0xc005fa82c0) (0xc002389ea0) Create stream
I0506 19:08:22.199458       7 log.go:172] (0xc005fa82c0) (0xc002389ea0) Stream added, broadcasting: 3
I0506 19:08:22.200299       7 log.go:172] (0xc005fa82c0) Reply frame received for 3
I0506 19:08:22.200347       7 log.go:172] (0xc005fa82c0) (0xc00227ee60) Create stream
I0506 19:08:22.200361       7 log.go:172] (0xc005fa82c0) (0xc00227ee60) Stream added, broadcasting: 5
I0506 19:08:22.201447       7 log.go:172] (0xc005fa82c0) Reply frame received for 5
I0506 19:08:22.268096       7 log.go:172] (0xc005fa82c0) Data frame received for 5
I0506 19:08:22.268166       7 log.go:172] (0xc00227ee60) (5) Data frame handling
I0506 19:08:22.268221       7 log.go:172] (0xc005fa82c0) Data frame received for 3
I0506 19:08:22.268254       7 log.go:172] (0xc002389ea0) (3) Data frame handling
I0506 19:08:22.268283       7 log.go:172] (0xc002389ea0) (3) Data frame sent
I0506 19:08:22.268298       7 log.go:172] (0xc005fa82c0) Data frame received for 3
I0506 19:08:22.268308       7 log.go:172] (0xc002389ea0) (3) Data frame handling
I0506 19:08:22.270201       7 log.go:172] (0xc005fa82c0) Data frame received for 1
I0506 19:08:22.270237       7 log.go:172] (0xc0003be000) (1) Data frame handling
I0506 19:08:22.270271       7 log.go:172] (0xc0003be000) (1) Data frame sent
I0506 19:08:22.270316       7 log.go:172] (0xc005fa82c0) (0xc0003be000) Stream removed, broadcasting: 1
I0506 19:08:22.270355       7 log.go:172] (0xc005fa82c0) Go away received
I0506 19:08:22.270452       7 log.go:172] (0xc005fa82c0) (0xc0003be000) Stream removed, broadcasting: 1
I0506 19:08:22.270490       7 log.go:172] (0xc005fa82c0) (0xc002389ea0) Stream removed, broadcasting: 3
I0506 19:08:22.270513       7 log.go:172] (0xc005fa82c0) (0xc00227ee60) Stream removed, broadcasting: 5
May  6 19:08:22.270: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May  6 19:08:22.270: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.270: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.413066       7 log.go:172] (0xc002d4c9a0) (0xc000bd2500) Create stream
I0506 19:08:22.413098       7 log.go:172] (0xc002d4c9a0) (0xc000bd2500) Stream added, broadcasting: 1
I0506 19:08:22.415288       7 log.go:172] (0xc002d4c9a0) Reply frame received for 1
I0506 19:08:22.415332       7 log.go:172] (0xc002d4c9a0) (0xc000bc40a0) Create stream
I0506 19:08:22.415344       7 log.go:172] (0xc002d4c9a0) (0xc000bc40a0) Stream added, broadcasting: 3
I0506 19:08:22.416241       7 log.go:172] (0xc002d4c9a0) Reply frame received for 3
I0506 19:08:22.416272       7 log.go:172] (0xc002d4c9a0) (0xc000195c20) Create stream
I0506 19:08:22.416283       7 log.go:172] (0xc002d4c9a0) (0xc000195c20) Stream added, broadcasting: 5
I0506 19:08:22.416990       7 log.go:172] (0xc002d4c9a0) Reply frame received for 5
I0506 19:08:22.488928       7 log.go:172] (0xc002d4c9a0) Data frame received for 5
I0506 19:08:22.488962       7 log.go:172] (0xc000195c20) (5) Data frame handling
I0506 19:08:22.489002       7 log.go:172] (0xc002d4c9a0) Data frame received for 3
I0506 19:08:22.489015       7 log.go:172] (0xc000bc40a0) (3) Data frame handling
I0506 19:08:22.489025       7 log.go:172] (0xc000bc40a0) (3) Data frame sent
I0506 19:08:22.489033       7 log.go:172] (0xc002d4c9a0) Data frame received for 3
I0506 19:08:22.489040       7 log.go:172] (0xc000bc40a0) (3) Data frame handling
I0506 19:08:22.490399       7 log.go:172] (0xc002d4c9a0) Data frame received for 1
I0506 19:08:22.490433       7 log.go:172] (0xc000bd2500) (1) Data frame handling
I0506 19:08:22.490445       7 log.go:172] (0xc000bd2500) (1) Data frame sent
I0506 19:08:22.490461       7 log.go:172] (0xc002d4c9a0) (0xc000bd2500) Stream removed, broadcasting: 1
I0506 19:08:22.490484       7 log.go:172] (0xc002d4c9a0) Go away received
I0506 19:08:22.490550       7 log.go:172] (0xc002d4c9a0) (0xc000bd2500) Stream removed, broadcasting: 1
I0506 19:08:22.490565       7 log.go:172] (0xc002d4c9a0) (0xc000bc40a0) Stream removed, broadcasting: 3
I0506 19:08:22.490575       7 log.go:172] (0xc002d4c9a0) (0xc000195c20) Stream removed, broadcasting: 5
May  6 19:08:22.490: INFO: Exec stderr: ""
May  6 19:08:22.490: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.490: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.570070       7 log.go:172] (0xc002aa2d10) (0xc00227f220) Create stream
I0506 19:08:22.570111       7 log.go:172] (0xc002aa2d10) (0xc00227f220) Stream added, broadcasting: 1
I0506 19:08:22.572248       7 log.go:172] (0xc002aa2d10) Reply frame received for 1
I0506 19:08:22.572277       7 log.go:172] (0xc002aa2d10) (0xc000bd2640) Create stream
I0506 19:08:22.572284       7 log.go:172] (0xc002aa2d10) (0xc000bd2640) Stream added, broadcasting: 3
I0506 19:08:22.573051       7 log.go:172] (0xc002aa2d10) Reply frame received for 3
I0506 19:08:22.573088       7 log.go:172] (0xc002aa2d10) (0xc000bc4140) Create stream
I0506 19:08:22.573102       7 log.go:172] (0xc002aa2d10) (0xc000bc4140) Stream added, broadcasting: 5
I0506 19:08:22.574280       7 log.go:172] (0xc002aa2d10) Reply frame received for 5
I0506 19:08:22.629799       7 log.go:172] (0xc002aa2d10) Data frame received for 5
I0506 19:08:22.629841       7 log.go:172] (0xc000bc4140) (5) Data frame handling
I0506 19:08:22.629902       7 log.go:172] (0xc002aa2d10) Data frame received for 3
I0506 19:08:22.629926       7 log.go:172] (0xc000bd2640) (3) Data frame handling
I0506 19:08:22.629940       7 log.go:172] (0xc000bd2640) (3) Data frame sent
I0506 19:08:22.629952       7 log.go:172] (0xc002aa2d10) Data frame received for 3
I0506 19:08:22.629961       7 log.go:172] (0xc000bd2640) (3) Data frame handling
I0506 19:08:22.631553       7 log.go:172] (0xc002aa2d10) Data frame received for 1
I0506 19:08:22.631588       7 log.go:172] (0xc00227f220) (1) Data frame handling
I0506 19:08:22.631607       7 log.go:172] (0xc00227f220) (1) Data frame sent
I0506 19:08:22.631620       7 log.go:172] (0xc002aa2d10) (0xc00227f220) Stream removed, broadcasting: 1
I0506 19:08:22.631633       7 log.go:172] (0xc002aa2d10) Go away received
I0506 19:08:22.631733       7 log.go:172] (0xc002aa2d10) (0xc00227f220) Stream removed, broadcasting: 1
I0506 19:08:22.631749       7 log.go:172] (0xc002aa2d10) (0xc000bd2640) Stream removed, broadcasting: 3
I0506 19:08:22.631762       7 log.go:172] (0xc002aa2d10) (0xc000bc4140) Stream removed, broadcasting: 5
May  6 19:08:22.631: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May  6 19:08:22.631: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.631: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.667169       7 log.go:172] (0xc002aa3340) (0xc00227f4a0) Create stream
I0506 19:08:22.667215       7 log.go:172] (0xc002aa3340) (0xc00227f4a0) Stream added, broadcasting: 1
I0506 19:08:22.669797       7 log.go:172] (0xc002aa3340) Reply frame received for 1
I0506 19:08:22.669860       7 log.go:172] (0xc002aa3340) (0xc0013d4140) Create stream
I0506 19:08:22.669878       7 log.go:172] (0xc002aa3340) (0xc0013d4140) Stream added, broadcasting: 3
I0506 19:08:22.670872       7 log.go:172] (0xc002aa3340) Reply frame received for 3
I0506 19:08:22.670908       7 log.go:172] (0xc002aa3340) (0xc000bc4280) Create stream
I0506 19:08:22.670921       7 log.go:172] (0xc002aa3340) (0xc000bc4280) Stream added, broadcasting: 5
I0506 19:08:22.671785       7 log.go:172] (0xc002aa3340) Reply frame received for 5
I0506 19:08:22.739007       7 log.go:172] (0xc002aa3340) Data frame received for 5
I0506 19:08:22.739072       7 log.go:172] (0xc000bc4280) (5) Data frame handling
I0506 19:08:22.739110       7 log.go:172] (0xc002aa3340) Data frame received for 3
I0506 19:08:22.739174       7 log.go:172] (0xc0013d4140) (3) Data frame handling
I0506 19:08:22.739202       7 log.go:172] (0xc0013d4140) (3) Data frame sent
I0506 19:08:22.739219       7 log.go:172] (0xc002aa3340) Data frame received for 3
I0506 19:08:22.739233       7 log.go:172] (0xc0013d4140) (3) Data frame handling
I0506 19:08:22.740600       7 log.go:172] (0xc002aa3340) Data frame received for 1
I0506 19:08:22.740617       7 log.go:172] (0xc00227f4a0) (1) Data frame handling
I0506 19:08:22.740628       7 log.go:172] (0xc00227f4a0) (1) Data frame sent
I0506 19:08:22.740758       7 log.go:172] (0xc002aa3340) (0xc00227f4a0) Stream removed, broadcasting: 1
I0506 19:08:22.740817       7 log.go:172] (0xc002aa3340) Go away received
I0506 19:08:22.740849       7 log.go:172] (0xc002aa3340) (0xc00227f4a0) Stream removed, broadcasting: 1
I0506 19:08:22.740863       7 log.go:172] (0xc002aa3340) (0xc0013d4140) Stream removed, broadcasting: 3
I0506 19:08:22.740875       7 log.go:172] (0xc002aa3340) (0xc000bc4280) Stream removed, broadcasting: 5
May  6 19:08:22.740: INFO: Exec stderr: ""
May  6 19:08:22.740: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.740: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.768794       7 log.go:172] (0xc002aa3970) (0xc00227f680) Create stream
I0506 19:08:22.768822       7 log.go:172] (0xc002aa3970) (0xc00227f680) Stream added, broadcasting: 1
I0506 19:08:22.770959       7 log.go:172] (0xc002aa3970) Reply frame received for 1
I0506 19:08:22.770992       7 log.go:172] (0xc002aa3970) (0xc000bc43c0) Create stream
I0506 19:08:22.771002       7 log.go:172] (0xc002aa3970) (0xc000bc43c0) Stream added, broadcasting: 3
I0506 19:08:22.771730       7 log.go:172] (0xc002aa3970) Reply frame received for 3
I0506 19:08:22.771776       7 log.go:172] (0xc002aa3970) (0xc0029960a0) Create stream
I0506 19:08:22.771796       7 log.go:172] (0xc002aa3970) (0xc0029960a0) Stream added, broadcasting: 5
I0506 19:08:22.772533       7 log.go:172] (0xc002aa3970) Reply frame received for 5
I0506 19:08:22.837362       7 log.go:172] (0xc002aa3970) Data frame received for 5
I0506 19:08:22.837398       7 log.go:172] (0xc0029960a0) (5) Data frame handling
I0506 19:08:22.837421       7 log.go:172] (0xc002aa3970) Data frame received for 3
I0506 19:08:22.837431       7 log.go:172] (0xc000bc43c0) (3) Data frame handling
I0506 19:08:22.837442       7 log.go:172] (0xc000bc43c0) (3) Data frame sent
I0506 19:08:22.837457       7 log.go:172] (0xc002aa3970) Data frame received for 3
I0506 19:08:22.837473       7 log.go:172] (0xc000bc43c0) (3) Data frame handling
I0506 19:08:22.839607       7 log.go:172] (0xc002aa3970) Data frame received for 1
I0506 19:08:22.839640       7 log.go:172] (0xc00227f680) (1) Data frame handling
I0506 19:08:22.839658       7 log.go:172] (0xc00227f680) (1) Data frame sent
I0506 19:08:22.839675       7 log.go:172] (0xc002aa3970) (0xc00227f680) Stream removed, broadcasting: 1
I0506 19:08:22.839698       7 log.go:172] (0xc002aa3970) Go away received
I0506 19:08:22.839819       7 log.go:172] (0xc002aa3970) (0xc00227f680) Stream removed, broadcasting: 1
I0506 19:08:22.839836       7 log.go:172] (0xc002aa3970) (0xc000bc43c0) Stream removed, broadcasting: 3
I0506 19:08:22.839846       7 log.go:172] (0xc002aa3970) (0xc0029960a0) Stream removed, broadcasting: 5
May  6 19:08:22.839: INFO: Exec stderr: ""
May  6 19:08:22.839: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.839: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.867277       7 log.go:172] (0xc004f54000) (0xc00227f900) Create stream
I0506 19:08:22.867309       7 log.go:172] (0xc004f54000) (0xc00227f900) Stream added, broadcasting: 1
I0506 19:08:22.869647       7 log.go:172] (0xc004f54000) Reply frame received for 1
I0506 19:08:22.869693       7 log.go:172] (0xc004f54000) (0xc00227f9a0) Create stream
I0506 19:08:22.869706       7 log.go:172] (0xc004f54000) (0xc00227f9a0) Stream added, broadcasting: 3
I0506 19:08:22.870556       7 log.go:172] (0xc004f54000) Reply frame received for 3
I0506 19:08:22.870594       7 log.go:172] (0xc004f54000) (0xc00227fa40) Create stream
I0506 19:08:22.870606       7 log.go:172] (0xc004f54000) (0xc00227fa40) Stream added, broadcasting: 5
I0506 19:08:22.871464       7 log.go:172] (0xc004f54000) Reply frame received for 5
I0506 19:08:22.931894       7 log.go:172] (0xc004f54000) Data frame received for 5
I0506 19:08:22.931915       7 log.go:172] (0xc00227fa40) (5) Data frame handling
I0506 19:08:22.931962       7 log.go:172] (0xc004f54000) Data frame received for 3
I0506 19:08:22.931987       7 log.go:172] (0xc00227f9a0) (3) Data frame handling
I0506 19:08:22.931998       7 log.go:172] (0xc00227f9a0) (3) Data frame sent
I0506 19:08:22.932003       7 log.go:172] (0xc004f54000) Data frame received for 3
I0506 19:08:22.932007       7 log.go:172] (0xc00227f9a0) (3) Data frame handling
I0506 19:08:22.933703       7 log.go:172] (0xc004f54000) Data frame received for 1
I0506 19:08:22.933726       7 log.go:172] (0xc00227f900) (1) Data frame handling
I0506 19:08:22.933737       7 log.go:172] (0xc00227f900) (1) Data frame sent
I0506 19:08:22.933749       7 log.go:172] (0xc004f54000) (0xc00227f900) Stream removed, broadcasting: 1
I0506 19:08:22.933800       7 log.go:172] (0xc004f54000) Go away received
I0506 19:08:22.933841       7 log.go:172] (0xc004f54000) (0xc00227f900) Stream removed, broadcasting: 1
I0506 19:08:22.933861       7 log.go:172] (0xc004f54000) (0xc00227f9a0) Stream removed, broadcasting: 3
I0506 19:08:22.933878       7 log.go:172] (0xc004f54000) (0xc00227fa40) Stream removed, broadcasting: 5
May  6 19:08:22.933: INFO: Exec stderr: ""
May  6 19:08:22.933: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9852 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:08:22.933: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:08:22.961997       7 log.go:172] (0xc005781290) (0xc0029963c0) Create stream
I0506 19:08:22.962035       7 log.go:172] (0xc005781290) (0xc0029963c0) Stream added, broadcasting: 1
I0506 19:08:22.964178       7 log.go:172] (0xc005781290) Reply frame received for 1
I0506 19:08:22.964214       7 log.go:172] (0xc005781290) (0xc0013d41e0) Create stream
I0506 19:08:22.964236       7 log.go:172] (0xc005781290) (0xc0013d41e0) Stream added, broadcasting: 3
I0506 19:08:22.965277       7 log.go:172] (0xc005781290) Reply frame received for 3
I0506 19:08:22.965320       7 log.go:172] (0xc005781290) (0xc0013d4280) Create stream
I0506 19:08:22.965332       7 log.go:172] (0xc005781290) (0xc0013d4280) Stream added, broadcasting: 5
I0506 19:08:22.966368       7 log.go:172] (0xc005781290) Reply frame received for 5
I0506 19:08:23.033315       7 log.go:172] (0xc005781290) Data frame received for 5
I0506 19:08:23.033497       7 log.go:172] (0xc0013d4280) (5) Data frame handling
I0506 19:08:23.033515       7 log.go:172] (0xc005781290) Data frame received for 3
I0506 19:08:23.033521       7 log.go:172] (0xc0013d41e0) (3) Data frame handling
I0506 19:08:23.033536       7 log.go:172] (0xc0013d41e0) (3) Data frame sent
I0506 19:08:23.033555       7 log.go:172] (0xc005781290) Data frame received for 3
I0506 19:08:23.033560       7 log.go:172] (0xc0013d41e0) (3) Data frame handling
I0506 19:08:23.035231       7 log.go:172] (0xc005781290) Data frame received for 1
I0506 19:08:23.035246       7 log.go:172] (0xc0029963c0) (1) Data frame handling
I0506 19:08:23.035254       7 log.go:172] (0xc0029963c0) (1) Data frame sent
I0506 19:08:23.035422       7 log.go:172] (0xc005781290) (0xc0029963c0) Stream removed, broadcasting: 1
I0506 19:08:23.035464       7 log.go:172] (0xc005781290) Go away received
I0506 19:08:23.035563       7 log.go:172] (0xc005781290) (0xc0029963c0) Stream removed, broadcasting: 1
I0506 19:08:23.035592       7 log.go:172] (0xc005781290) (0xc0013d41e0) Stream removed, broadcasting: 3
I0506 19:08:23.035607       7 log.go:172] (0xc005781290) (0xc0013d4280) Stream removed, broadcasting: 5
May  6 19:08:23.035: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:08:23.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9852" for this suite.

• [SLOW TEST:15.713 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4638,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:08:23.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May  6 19:08:24.370: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May  6 19:08:26.380: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388903, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 19:08:28.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388904, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388903, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  6 19:08:31.412: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  6 19:08:31.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:08:34.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2563" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:11.835 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":272,"skipped":4645,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:08:34.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May  6 19:08:35.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3745'
May  6 19:08:36.004: INFO: stderr: ""
May  6 19:08:36.004: INFO: stdout: "pod/pause created\n"
May  6 19:08:36.004: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May  6 19:08:36.004: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3745" to be "running and ready"
May  6 19:08:36.400: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 395.594747ms
May  6 19:08:38.404: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399566385s
May  6 19:08:40.408: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.403844867s
May  6 19:08:40.408: INFO: Pod "pause" satisfied condition "running and ready"
May  6 19:08:40.408: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May  6 19:08:40.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3745'
May  6 19:08:40.510: INFO: stderr: ""
May  6 19:08:40.510: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May  6 19:08:40.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3745'
May  6 19:08:40.885: INFO: stderr: ""
May  6 19:08:40.885: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May  6 19:08:40.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3745'
May  6 19:08:41.095: INFO: stderr: ""
May  6 19:08:41.095: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May  6 19:08:41.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3745'
May  6 19:08:41.313: INFO: stderr: ""
May  6 19:08:41.313: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May  6 19:08:41.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3745'
May  6 19:08:41.542: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  6 19:08:41.542: INFO: stdout: "pod \"pause\" force deleted\n"
May  6 19:08:41.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3745'
May  6 19:08:41.673: INFO: stderr: "No resources found in kubectl-3745 namespace.\n"
May  6 19:08:41.673: INFO: stdout: ""
May  6 19:08:41.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3745 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  6 19:08:41.768: INFO: stderr: ""
May  6 19:08:41.768: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:08:41.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3745" for this suite.

• [SLOW TEST:6.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":273,"skipped":4675,"failed":0}
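The label add/verify/remove cycle the test just exercised can be reproduced by hand. A minimal sketch, assuming a running cluster and a pod named `pause` already present; the namespace `kubectl-3745` is the generated one from this run and will differ on a fresh invocation:

```shell
# Add the label (mirrors the test's first kubectl invocation)
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-3745

# Show the label as an extra column; the TESTING-LABEL column
# should read "testing-label-value"
kubectl get pod pause -L testing-label --namespace=kubectl-3745

# Remove the label using the trailing-dash syntax
kubectl label pods pause testing-label- --namespace=kubectl-3745

# The TESTING-LABEL column is now empty
kubectl get pod pause -L testing-label --namespace=kubectl-3745
```

These are the same four commands visible in the log above, minus the `--server`/`--kubeconfig` flags, which `kubectl` normally picks up from the current context.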
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:08:41.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8239
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 19:08:41.916: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 19:08:43.591: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 19:08:45.595: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 19:08:47.603: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 19:08:49.596: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 19:08:51.874: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 19:08:53.639: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 19:08:56.191: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 19:08:57.596: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 19:08:59.596: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 19:08:59.602: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 19:09:01.693: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 19:09:03.606: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 19:09:05.606: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 19:09:12.092: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.171:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8239 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:09:12.092: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:09:12.125816       7 log.go:172] (0xc004f54630) (0xc001020c80) Create stream
I0506 19:09:12.125854       7 log.go:172] (0xc004f54630) (0xc001020c80) Stream added, broadcasting: 1
I0506 19:09:12.127725       7 log.go:172] (0xc004f54630) Reply frame received for 1
I0506 19:09:12.127753       7 log.go:172] (0xc004f54630) (0xc001020d20) Create stream
I0506 19:09:12.127763       7 log.go:172] (0xc004f54630) (0xc001020d20) Stream added, broadcasting: 3
I0506 19:09:12.129102       7 log.go:172] (0xc004f54630) Reply frame received for 3
I0506 19:09:12.129355       7 log.go:172] (0xc004f54630) (0xc0010ad040) Create stream
I0506 19:09:12.129380       7 log.go:172] (0xc004f54630) (0xc0010ad040) Stream added, broadcasting: 5
I0506 19:09:12.130947       7 log.go:172] (0xc004f54630) Reply frame received for 5
I0506 19:09:12.201639       7 log.go:172] (0xc004f54630) Data frame received for 3
I0506 19:09:12.201675       7 log.go:172] (0xc001020d20) (3) Data frame handling
I0506 19:09:12.201692       7 log.go:172] (0xc001020d20) (3) Data frame sent
I0506 19:09:12.202060       7 log.go:172] (0xc004f54630) Data frame received for 3
I0506 19:09:12.202084       7 log.go:172] (0xc001020d20) (3) Data frame handling
I0506 19:09:12.202103       7 log.go:172] (0xc004f54630) Data frame received for 5
I0506 19:09:12.202111       7 log.go:172] (0xc0010ad040) (5) Data frame handling
I0506 19:09:12.203386       7 log.go:172] (0xc004f54630) Data frame received for 1
I0506 19:09:12.203401       7 log.go:172] (0xc001020c80) (1) Data frame handling
I0506 19:09:12.203413       7 log.go:172] (0xc001020c80) (1) Data frame sent
I0506 19:09:12.203437       7 log.go:172] (0xc004f54630) (0xc001020c80) Stream removed, broadcasting: 1
I0506 19:09:12.203456       7 log.go:172] (0xc004f54630) Go away received
I0506 19:09:12.203599       7 log.go:172] (0xc004f54630) (0xc001020c80) Stream removed, broadcasting: 1
I0506 19:09:12.203625       7 log.go:172] (0xc004f54630) (0xc001020d20) Stream removed, broadcasting: 3
I0506 19:09:12.203633       7 log.go:172] (0xc004f54630) (0xc0010ad040) Stream removed, broadcasting: 5
May  6 19:09:12.203: INFO: Found all expected endpoints: [netserver-0]
May  6 19:09:12.205: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.188:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8239 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  6 19:09:12.205: INFO: >>> kubeConfig: /root/.kube/config
I0506 19:09:12.228895       7 log.go:172] (0xc004f54c60) (0xc001021720) Create stream
I0506 19:09:12.228927       7 log.go:172] (0xc004f54c60) (0xc001021720) Stream added, broadcasting: 1
I0506 19:09:12.230805       7 log.go:172] (0xc004f54c60) Reply frame received for 1
I0506 19:09:12.230849       7 log.go:172] (0xc004f54c60) (0xc000bc4640) Create stream
I0506 19:09:12.230864       7 log.go:172] (0xc004f54c60) (0xc000bc4640) Stream added, broadcasting: 3
I0506 19:09:12.231913       7 log.go:172] (0xc004f54c60) Reply frame received for 3
I0506 19:09:12.231951       7 log.go:172] (0xc004f54c60) (0xc0010ad360) Create stream
I0506 19:09:12.231963       7 log.go:172] (0xc004f54c60) (0xc0010ad360) Stream added, broadcasting: 5
I0506 19:09:12.232888       7 log.go:172] (0xc004f54c60) Reply frame received for 5
I0506 19:09:12.293378       7 log.go:172] (0xc004f54c60) Data frame received for 5
I0506 19:09:12.293420       7 log.go:172] (0xc0010ad360) (5) Data frame handling
I0506 19:09:12.293445       7 log.go:172] (0xc004f54c60) Data frame received for 3
I0506 19:09:12.293456       7 log.go:172] (0xc000bc4640) (3) Data frame handling
I0506 19:09:12.293473       7 log.go:172] (0xc000bc4640) (3) Data frame sent
I0506 19:09:12.293494       7 log.go:172] (0xc004f54c60) Data frame received for 3
I0506 19:09:12.293523       7 log.go:172] (0xc000bc4640) (3) Data frame handling
I0506 19:09:12.294664       7 log.go:172] (0xc004f54c60) Data frame received for 1
I0506 19:09:12.294688       7 log.go:172] (0xc001021720) (1) Data frame handling
I0506 19:09:12.294702       7 log.go:172] (0xc001021720) (1) Data frame sent
I0506 19:09:12.294729       7 log.go:172] (0xc004f54c60) (0xc001021720) Stream removed, broadcasting: 1
I0506 19:09:12.294749       7 log.go:172] (0xc004f54c60) Go away received
I0506 19:09:12.294885       7 log.go:172] (0xc004f54c60) (0xc001021720) Stream removed, broadcasting: 1
I0506 19:09:12.294911       7 log.go:172] (0xc004f54c60) (0xc000bc4640) Stream removed, broadcasting: 3
I0506 19:09:12.294928       7 log.go:172] (0xc004f54c60) (0xc0010ad360) Stream removed, broadcasting: 5
May  6 19:09:12.294: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:09:12.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8239" for this suite.

• [SLOW TEST:30.526 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4706,"failed":0}
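The node-pod connectivity check above boils down to one exec'd curl per netserver pod. A sketch of the equivalent manual check, assuming a live cluster; the namespace, pod name, container name, and target IP are taken from this run's log and will differ on another run:

```shell
# From inside the host-network test pod, fetch the target pod's hostname
# endpoint; grep drops blank lines so an empty reply is detectable.
# 10.244.2.171 is the netserver-0 pod IP from this run (illustrative).
kubectl exec host-test-container-pod \
  --namespace=pod-network-test-8239 -c agnhost -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
    http://10.244.2.171:8080/hostName | grep -v '^\s*\$'"
```

The test passes when the returned hostname matches the expected endpoint (`netserver-0`, then `netserver-1`), which is what the two "Found all expected endpoints" lines record.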
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  6 19:09:12.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  6 19:09:24.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3941" for this suite.

• [SLOW TEST:12.072 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":275,"skipped":4717,"failed":0}
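The ResourceQuota lifecycle the test walked through (create quota, create ReplicaSet, watch usage rise, delete, watch it fall) can be sketched with object-count quotas. The quota name, limit, and ReplicaSet manifest here are illustrative, not taken from the log, which does not show the test's actual objects:

```shell
# Create an object-count quota capping ReplicaSets in the namespace
# (name and limit are hypothetical; the e2e test generates its own)
kubectl create quota test-quota \
  --hard=count/replicasets.apps=5 \
  --namespace=resourcequota-3941

# Inspect quota status: Used starts at 0 for count/replicasets.apps
kubectl describe quota test-quota --namespace=resourcequota-3941

# After creating a ReplicaSet in the namespace, Used becomes 1;
# after deleting it, the quota controller releases the usage back to 0,
# which is the "released usage" condition the test waits on.
```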
May  6 19:09:24.375: INFO: Running AfterSuite actions on all nodes
May  6 19:09:24.375: INFO: Running AfterSuite actions on node 1
May  6 19:09:24.375: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5666.271 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS