I0826 16:14:39.334567 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0826 16:14:39.349799 7 e2e.go:124] Starting e2e run "14a5f9cd-b16e-4efd-8686-051fc7845f57" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598458478 - Will randomize all specs
Will run 275 of 4992 specs
Aug 26 16:14:39.410: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 16:14:39.412: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 26 16:14:39.437: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 26 16:14:39.471: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 26 16:14:39.471: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 26 16:14:39.471: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 26 16:14:39.476: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 26 16:14:39.476: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 26 16:14:39.476: INFO: e2e test version: v1.18.8
Aug 26 16:14:39.477: INFO: kube-apiserver version: v1.18.8
Aug 26 16:14:39.477: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 16:14:39.481: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:14:39.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Aug 26 16:14:40.551: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 16:14:46.534: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:14:46.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2938" for this suite.
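The test above exercises terminationMessagePolicy FallbackToLogsOnError, where the kubelet falls back to the tail of the container log as the termination message when the container fails without writing /dev/termination-log. The Go sketch below is illustrative only and not part of this log run; the pod and container names are assumptions, while the "DONE" output mirrors the message checked above.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				// On failure, use the tail of the container log as the termination message.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}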
• [SLOW TEST:7.282 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
on terminated container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:14:46.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:14:47.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:14:57.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2063" for this suite.
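The Pods test above retrieves container logs over a websocket through the API server. As a rough companion, and not part of this log, the sketch below shows the more common client-go streaming log request; the namespace, pod and container names are assumptions.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build a log request for an assumed pod and stream its output.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-demo",
		&corev1.PodLogOptions{Container: "main"})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	logs, err := io.ReadAll(stream)
	if err != nil {
		panic(err)
	}
	fmt.Printf("container logs:\n%s", logs)
}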
• [SLOW TEST:10.787 seconds] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":27,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:14:57.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 26 16:14:58.162: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:15:26.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2376" for this suite. • [SLOW TEST:29.506 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":3,"skipped":30,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:15:27.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:15:34.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5782" for this suite. • [SLOW TEST:8.060 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":4,"skipped":34,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:15:35.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 26 16:15:36.500: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Aug 26 16:15:38.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:15:41.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:15:43.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:15:44.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055336, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 16:15:48.188: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 26 16:15:48.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:15:50.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2674" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:15.231 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":5,"skipped":36,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:15:50.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 26 16:15:50.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4" in namespace "projected-8756" to be "Succeeded or Failed" Aug 26 16:15:50.546: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 39.747354ms Aug 26 16:15:52.899: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39272267s Aug 26 16:15:55.214: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.70680839s Aug 26 16:15:58.008: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.501365272s Aug 26 16:16:00.200: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.692805876s Aug 26 16:16:02.268: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.761002649s STEP: Saw pod success Aug 26 16:16:02.268: INFO: Pod "downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4" satisfied condition "Succeeded or Failed" Aug 26 16:16:02.282: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4 container client-container: STEP: delete the pod Aug 26 16:16:02.598: INFO: Waiting for pod downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4 to disappear Aug 26 16:16:02.611: INFO: Pod downwardapi-volume-c2b25378-655b-4bb2-a104-0ed86d2f1ce4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:16:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8756" for this suite. • [SLOW TEST:12.272 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":45,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:16:02.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d in namespace container-probe-5167 Aug 26 16:16:11.076: INFO: Started pod liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d in namespace container-probe-5167 STEP: checking the pod's current state and verifying that restartCount is present Aug 26 16:16:11.082: INFO: Initial restart count of pod liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is 0 Aug 26 16:16:29.005: INFO: Restart count of pod container-probe-5167/liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is now 1 (17.922845695s elapsed) Aug 26 16:16:51.405: INFO: Restart count of pod container-probe-5167/liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is now 2 (40.322050029s elapsed) Aug 26 
16:17:06.645: INFO: Restart count of pod container-probe-5167/liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is now 3 (55.562858269s elapsed) Aug 26 16:17:27.373: INFO: Restart count of pod container-probe-5167/liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is now 4 (1m16.290953219s elapsed) Aug 26 16:18:39.230: INFO: Restart count of pod container-probe-5167/liveness-e32402ed-3cd9-4257-8519-e7758ac65f6d is now 5 (2m28.147518548s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:18:39.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5167" for this suite. • [SLOW TEST:156.794 seconds] [k8s.io] Probing container /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":95,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:18:39.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-be14782d-e629-4c8d-8b61-554509ac4fa6 STEP: Creating a pod to test consume secrets Aug 26 16:18:40.605: INFO: Waiting up to 5m0s for pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4" in namespace "secrets-7758" to be "Succeeded or Failed" Aug 26 16:18:40.880: INFO: Pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 275.168548ms Aug 26 16:18:42.884: INFO: Pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278260992s Aug 26 16:18:45.248: INFO: Pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642567619s Aug 26 16:18:47.551: INFO: Pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.946124057s STEP: Saw pod success Aug 26 16:18:47.551: INFO: Pod "pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4" satisfied condition "Succeeded or Failed" Aug 26 16:18:47.554: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4 container secret-env-test: STEP: delete the pod Aug 26 16:18:48.127: INFO: Waiting for pod pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4 to disappear Aug 26 16:18:48.160: INFO: Pod pod-secrets-2f7d3d71-9cba-4878-af55-710d069fd4e4 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:18:48.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7758" for this suite. • [SLOW TEST:8.750 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:18:48.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 16:18:54.159: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 16:18:56.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:00.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:01.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:02.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:04.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:06.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055534, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055533, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 16:19:10.023: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 26 16:19:10.041: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:19:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2739" for this suite. STEP: Destroying namespace "webhook-2739-markers" for this suite. 
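The "should deny crd creation" test above registers a validating webhook that intercepts CustomResourceDefinition creation and rejects it. The sketch below is not part of this log; it builds a comparable ValidatingWebhookConfiguration object and prints it. The configuration name, webhook name, service path and the omitted CABundle are assumptions; the service name e2e-test-webhook and namespace webhook-2739 are taken from the log above.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/crd" // assumed webhook path
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation-demo"}, // assumed name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com", // assumed webhook name
			Rules: []admissionregistrationv1.RuleWithOperations{{
				// Intercept CREATE of CustomResourceDefinitions.
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"v1"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-2739", Name: "e2e-test-webhook", Path: &path,
				},
				// CABundle for the webhook's serving certificate would go here.
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}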
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:22.427 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":9,"skipped":143,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:19:10.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 16:19:20.194: INFO: Successfully updated pod "pod-update-fb98de75-6508-4916-b6e6-8b647bab0d38"
STEP: verifying the updated pod is in kubernetes
Aug 26 16:19:20.510: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:19:20.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4169" for this suite.
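The pod update test above mutates an existing pod in place. A minimal client-go sketch of that flow follows; it is not part of this log, and the namespace, pod name and label value are assumptions. Only mutable pod fields such as labels and annotations can be changed this way.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get(ctx, "pod-update-demo", metav1.GetOptions{}) // assumed pod name
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	// Change a label, then write the pod back; a conflict error would mean a retry is needed.
	pod.Labels["time"] = "updated"
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}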
• [SLOW TEST:10.114 seconds] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":162,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:19:20.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 16:19:23.312: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 16:19:25.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:27.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:19:29.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055563, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 16:19:32.601: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 26 16:19:33.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6245-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:19:35.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3651" for this suite. STEP: Destroying namespace "webhook-3651-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.734 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":11,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:19:35.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 26 16:19:35.495: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix206560870/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:19:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7224" for this suite.
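The kubectl proxy test above starts the proxy with --unix-socket and then reads /api/ through that socket. The sketch below is not part of this log; it shows one way to issue the same request from Go once a proxy is already listening on a unix socket started separately, for example with kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock. The socket path is an assumption.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	const sock = "/tmp/kubectl-proxy.sock" // assumed socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request through the proxy's unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}
	// The host part is ignored; the custom dialer always connects to the socket.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}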
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":12,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:19:35.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 26 16:19:36.218: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 26 16:19:41.531: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:19:41.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3799" for this suite. • [SLOW TEST:6.506 seconds] [sig-apps] ReplicationController /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":13,"skipped":209,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:19:42.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Aug 26 16:19:51.594: INFO: Pod pod-hostip-df0645c1-4d67-4f71-9207-d54754593eec has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:19:51.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7997" for this suite.
• [SLOW TEST:9.571 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:19:51.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-6237, will wait for the garbage collector to delete the pods
Aug 26 16:20:04.287: INFO: Deleting Job.batch foo took: 3.809075ms
Aug 26 16:20:04.687: INFO: Terminating Job.batch foo pods took: 400.221892ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:20:49.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6237" for this suite.
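The Job test above deletes Job.batch foo and waits for the garbage collector to remove its pods. The client-go sketch below is not part of this log and performs a comparable delete; the choice of background propagation and the kubeconfig handling are assumptions, while the namespace job-6237 and job name foo come from the log.

// Illustrative sketch, not from the conformance run above.
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation lets the garbage collector delete the Job's pods asynchronously.
	policy := metav1.DeletePropagationBackground
	err = cs.BatchV1().Jobs("job-6237").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}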
• [SLOW TEST:57.865 seconds] [sig-apps] Job /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":15,"skipped":244,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:20:49.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 26 16:20:50.623: INFO: PodSpec: initContainers in spec.initContainers Aug 26 16:22:00.793: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-55be5d4e-808d-410a-954b-bbf99cb3f09a", GenerateName:"", Namespace:"init-container-5993", SelfLink:"/api/v1/namespaces/init-container-5993/pods/pod-init-55be5d4e-808d-410a-954b-bbf99cb3f09a", UID:"a2c3eea2-af2f-472a-bdf1-5446cfe04546", ResourceVersion:"1091375", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734055650, loc:(*time.Location)(0x7b565c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"623492732"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ca5180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ca51a0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ca51c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ca51e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sxfn2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00292b380), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sxfn2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sxfn2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sxfn2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0008a5568), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0030147e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008a5830)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0008a58d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0008a58d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0008a58dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055651, loc:(*time.Location)(0x7b565c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055651, loc:(*time.Location)(0x7b565c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055651, loc:(*time.Location)(0x7b565c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055650, loc:(*time.Location)(0x7b565c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.144", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.144"}}, StartTime:(*v1.Time)(0xc002ca5200), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0030148c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc003014930)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a9d18b06605e78491164fa20d2c4e5bc87ecb1cbd4b54a2979c410bf5ee178f3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ca5240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ca5220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0008a59df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:22:00.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5993" for this suite. 
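The pod from this test can be reconstructed from the spec dump above as a minimal sketch with the Kubernetes Go API types: init1 runs /bin/false, init2 runs /bin/true, and the app container run1 uses the pause image with a 100m CPU request and limit. With RestartPolicy Always the failing init container is retried with back-off and run1 never starts, which matches the status in the dump (init1 at RestartCount 3, init2 and run1 still Waiting). The generated pod name and the service-account token volume are simplified away here.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cpu := resource.MustParse("100m")
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			// With RestartPolicy Always, a failing init container is restarted
			// with back-off and the regular containers are never started.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{
					Name:  "run1",
					Image: "k8s.gcr.io/pause:3.2",
					Resources: corev1.ResourceRequirements{
						Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
						Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}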
• [SLOW TEST:71.305 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":16,"skipped":259,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:22:00.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 26 16:22:01.333: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 26 16:22:03.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 26 16:22:05.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055721, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 26 16:22:08.426: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:22:09.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7815" for this suite. STEP: Destroying namespace "webhook-7815-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.298 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":17,"skipped":270,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:22:09.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod 
pod-subpath-test-projected-cxlv STEP: Creating a pod to test atomic-volume-subpath Aug 26 16:22:09.371: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cxlv" in namespace "subpath-2396" to be "Succeeded or Failed" Aug 26 16:22:09.418: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Pending", Reason="", readiness=false. Elapsed: 47.15812ms Aug 26 16:22:11.422: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051006058s Aug 26 16:22:13.426: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054542652s Aug 26 16:22:15.428: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 6.057311753s Aug 26 16:22:17.432: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 8.060340119s Aug 26 16:22:19.435: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 10.063877318s Aug 26 16:22:22.214: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 12.84319101s Aug 26 16:22:24.892: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 15.521224885s Aug 26 16:22:27.364: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 17.992808836s Aug 26 16:22:29.546: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 20.174612658s Aug 26 16:22:31.550: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 22.17856257s Aug 26 16:22:33.554: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Running", Reason="", readiness=true. Elapsed: 24.182833757s Aug 26 16:22:35.558: INFO: Pod "pod-subpath-test-projected-cxlv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.186838948s STEP: Saw pod success Aug 26 16:22:35.558: INFO: Pod "pod-subpath-test-projected-cxlv" satisfied condition "Succeeded or Failed" Aug 26 16:22:35.561: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-cxlv container test-container-subpath-projected-cxlv: STEP: delete the pod Aug 26 16:22:35.642: INFO: Waiting for pod pod-subpath-test-projected-cxlv to disappear Aug 26 16:22:35.729: INFO: Pod pod-subpath-test-projected-cxlv no longer exists STEP: Deleting pod pod-subpath-test-projected-cxlv Aug 26 16:22:35.729: INFO: Deleting pod "pod-subpath-test-projected-cxlv" in namespace "subpath-2396" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:22:35.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2396" for this suite. 
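A minimal sketch of a subpath mount over a projected volume, in the spirit of pod-subpath-test-projected-cxlv above. Only the pod and container names come from the log; the projected ConfigMap source (example-configmap, assumed to carry a key named file.txt), the mount path, and the command are assumptions used to illustrate how subPath selects a single file from an atomically updated volume.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-cxlv"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{
					Name: "projected-vol",
					VolumeSource: corev1.VolumeSource{
						Projected: &corev1.ProjectedVolumeSource{
							Sources: []corev1.VolumeProjection{
								{
									// Assumed ConfigMap source; any projected source behaves the same way.
									ConfigMap: &corev1.ConfigMapProjection{
										LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"},
									},
								},
							},
						},
					},
				},
			},
			Containers: []corev1.Container{
				{
					Name:    "test-container-subpath-projected-cxlv",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"cat", "/mnt/test/file.txt"},
					VolumeMounts: []corev1.VolumeMount{
						{
							Name:      "projected-vol",
							MountPath: "/mnt/test/file.txt",
							// SubPath picks a single key out of the projected volume.
							SubPath: "file.txt",
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}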
• [SLOW TEST:26.611 seconds] [sig-storage] Subpath /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":18,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:22:35.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 26 16:22:35.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257" in namespace "projected-3443" to be "Succeeded or Failed" Aug 26 16:22:35.864: INFO: Pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257": Phase="Pending", Reason="", readiness=false. Elapsed: 18.711495ms Aug 26 16:22:37.867: INFO: Pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022498101s Aug 26 16:22:39.921: INFO: Pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076009874s Aug 26 16:22:41.939: INFO: Pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094055228s STEP: Saw pod success Aug 26 16:22:41.939: INFO: Pod "downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257" satisfied condition "Succeeded or Failed" Aug 26 16:22:41.941: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257 container client-container: STEP: delete the pod Aug 26 16:22:41.973: INFO: Waiting for pod downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257 to disappear Aug 26 16:22:41.987: INFO: Pod downwardapi-volume-47a8a199-08fc-49f2-9463-0eff5e218257 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:22:41.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3443" for this suite. • [SLOW TEST:6.252 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":313,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:22:41.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 26 16:22:42.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35" in namespace "downward-api-1567" to be "Succeeded or Failed" Aug 26 16:22:42.499: INFO: Pod "downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35": Phase="Pending", Reason="", readiness=false. Elapsed: 62.827599ms Aug 26 16:22:44.503: INFO: Pod "downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067362754s Aug 26 16:22:46.508: INFO: Pod "downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071808695s STEP: Saw pod success Aug 26 16:22:46.508: INFO: Pod "downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35" satisfied condition "Succeeded or Failed" Aug 26 16:22:46.511: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35 container client-container: STEP: delete the pod Aug 26 16:22:46.583: INFO: Waiting for pod downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35 to disappear Aug 26 16:22:46.609: INFO: Pod downwardapi-volume-8377beb7-1fee-4fb3-a086-47fecd6e9b35 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:22:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1567" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:22:46.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Aug 26 16:22:46.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7562' Aug 26 16:23:02.418: INFO: stderr: "" Aug 26 16:23:02.418: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
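Referring back to the two Downward API volume tests above: both exercise the same mechanism, a downwardAPI volume that renders pod or container fields as files, optionally with a per-item mode. The sketch below exposes the container's CPU request as a file. The container name client-container appears in the logs; the mount path, file name, 0400 mode, and 250m request are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item mode, as in the "should set mode on item file" case
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:    "client-container",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
					},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "podinfo", MountPath: "/etc/podinfo"},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "podinfo",
					VolumeSource: corev1.VolumeSource{
						DownwardAPI: &corev1.DownwardAPIVolumeSource{
							Items: []corev1.DownwardAPIVolumeFile{
								{
									Path: "cpu_request",
									Mode: &mode,
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								},
							},
						},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}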
Aug 26 16:23:02.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:02.794: INFO: stderr: "" Aug 26 16:23:02.794: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Aug 26 16:23:07.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:07.901: INFO: stderr: "" Aug 26 16:23:07.901: INFO: stdout: "update-demo-nautilus-242vd update-demo-nautilus-sgvk6 " Aug 26 16:23:07.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-242vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:08.010: INFO: stderr: "" Aug 26 16:23:08.011: INFO: stdout: "" Aug 26 16:23:08.011: INFO: update-demo-nautilus-242vd is created but not running Aug 26 16:23:13.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:13.187: INFO: stderr: "" Aug 26 16:23:13.187: INFO: stdout: "update-demo-nautilus-242vd update-demo-nautilus-sgvk6 " Aug 26 16:23:13.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-242vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:13.284: INFO: stderr: "" Aug 26 16:23:13.284: INFO: stdout: "true" Aug 26 16:23:13.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-242vd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:13.377: INFO: stderr: "" Aug 26 16:23:13.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 26 16:23:13.377: INFO: validating pod update-demo-nautilus-242vd Aug 26 16:23:13.380: INFO: got data: { "image": "nautilus.jpg" } Aug 26 16:23:13.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 26 16:23:13.380: INFO: update-demo-nautilus-242vd is verified up and running Aug 26 16:23:13.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:13.473: INFO: stderr: "" Aug 26 16:23:13.473: INFO: stdout: "true" Aug 26 16:23:13.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:14.537: INFO: stderr: "" Aug 26 16:23:14.537: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 26 16:23:14.537: INFO: validating pod update-demo-nautilus-sgvk6 Aug 26 16:23:14.909: INFO: got data: { "image": "nautilus.jpg" } Aug 26 16:23:14.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 26 16:23:14.909: INFO: update-demo-nautilus-sgvk6 is verified up and running STEP: scaling down the replication controller Aug 26 16:23:14.911: INFO: scanned /root for discovery docs: Aug 26 16:23:14.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7562' Aug 26 16:23:16.366: INFO: stderr: "" Aug 26 16:23:16.366: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 26 16:23:16.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:16.472: INFO: stderr: "" Aug 26 16:23:16.472: INFO: stdout: "update-demo-nautilus-242vd update-demo-nautilus-sgvk6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 26 16:23:21.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:21.837: INFO: stderr: "" Aug 26 16:23:21.837: INFO: stdout: "update-demo-nautilus-242vd update-demo-nautilus-sgvk6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 26 16:23:26.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:26.938: INFO: stderr: "" Aug 26 16:23:26.938: INFO: stdout: "update-demo-nautilus-242vd update-demo-nautilus-sgvk6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 26 16:23:31.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:32.041: INFO: stderr: "" Aug 26 16:23:32.041: INFO: stdout: "update-demo-nautilus-sgvk6 " Aug 26 16:23:32.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:32.120: INFO: stderr: "" Aug 26 16:23:32.120: INFO: stdout: "true" Aug 26 16:23:32.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:32.219: INFO: stderr: "" Aug 26 16:23:32.219: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 26 16:23:32.219: INFO: validating pod update-demo-nautilus-sgvk6 Aug 26 16:23:32.281: INFO: got data: { "image": "nautilus.jpg" } Aug 26 16:23:32.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 26 16:23:32.281: INFO: update-demo-nautilus-sgvk6 is verified up and running STEP: scaling up the replication controller Aug 26 16:23:32.283: INFO: scanned /root for discovery docs: Aug 26 16:23:32.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7562' Aug 26 16:23:33.549: INFO: stderr: "" Aug 26 16:23:33.549: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 26 16:23:33.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:33.645: INFO: stderr: "" Aug 26 16:23:33.645: INFO: stdout: "update-demo-nautilus-8c9g8 update-demo-nautilus-sgvk6 " Aug 26 16:23:33.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8c9g8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:33.743: INFO: stderr: "" Aug 26 16:23:33.743: INFO: stdout: "" Aug 26 16:23:33.743: INFO: update-demo-nautilus-8c9g8 is created but not running Aug 26 16:23:38.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7562' Aug 26 16:23:38.852: INFO: stderr: "" Aug 26 16:23:38.852: INFO: stdout: "update-demo-nautilus-8c9g8 update-demo-nautilus-sgvk6 " Aug 26 16:23:38.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8c9g8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:39.146: INFO: stderr: "" Aug 26 16:23:39.146: INFO: stdout: "true" Aug 26 16:23:39.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8c9g8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:39.863: INFO: stderr: "" Aug 26 16:23:39.863: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 26 16:23:39.863: INFO: validating pod update-demo-nautilus-8c9g8 Aug 26 16:23:40.049: INFO: got data: { "image": "nautilus.jpg" } Aug 26 16:23:40.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 26 16:23:40.049: INFO: update-demo-nautilus-8c9g8 is verified up and running Aug 26 16:23:40.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:40.139: INFO: stderr: "" Aug 26 16:23:40.139: INFO: stdout: "true" Aug 26 16:23:40.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sgvk6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7562' Aug 26 16:23:40.317: INFO: stderr: "" Aug 26 16:23:40.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 26 16:23:40.317: INFO: validating pod update-demo-nautilus-sgvk6 Aug 26 16:23:40.376: INFO: got data: { "image": "nautilus.jpg" } Aug 26 16:23:40.376: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 26 16:23:40.376: INFO: update-demo-nautilus-sgvk6 is verified up and running STEP: using delete to clean up resources Aug 26 16:23:40.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7562' Aug 26 16:23:40.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 26 16:23:40.582: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 26 16:23:40.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7562' Aug 26 16:23:40.962: INFO: stderr: "No resources found in kubectl-7562 namespace.\n" Aug 26 16:23:40.962: INFO: stdout: "" Aug 26 16:23:40.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7562 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 26 16:23:41.714: INFO: stderr: "" Aug 26 16:23:41.714: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:23:41.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7562" for this suite. 
• [SLOW TEST:55.575 seconds] [sig-cli] Kubectl client /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":21,"skipped":383,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:23:42.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 26 16:23:44.820: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 26 16:23:50.114: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 26 16:23:58.246: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 26 16:23:59.287: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4466 /apis/apps/v1/namespaces/deployment-4466/deployments/test-cleanup-deployment 0b483ea9-2fdb-462f-a83d-0e9a1d8eecb8 1092018 1 2020-08-26 16:23:58 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-26 16:23:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 
114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e941a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 26 16:23:59.290: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-4466 /apis/apps/v1/namespaces/deployment-4466/replicasets/test-cleanup-deployment-b4867b47f d5658181-c7ee-4ad6-91b2-496285e6ec88 1092020 1 2020-08-26 16:23:58 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0b483ea9-2fdb-462f-a83d-0e9a1d8eecb8 
0xc002e949f0 0xc002e949f1}] [] [{kube-controller-manager Update apps/v1 2020-08-26 16:23:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 98 52 56 51 101 97 57 45 50 102 100 98 45 52 54 50 102 45 97 56 51 100 45 48 101 57 97 49 100 56 101 101 99 98 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e94a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 26 16:23:59.290: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 26 16:23:59.291: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4466 /apis/apps/v1/namespaces/deployment-4466/replicasets/test-cleanup-controller aa5e9519-0739-4e3e-a3f4-92740b2f95ac 1092019 1 2020-08-26 16:23:44 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 0b483ea9-2fdb-462f-a83d-0e9a1d8eecb8 0xc002e9475f 0xc002e948f0}] [] [{e2e.test Update apps/v1 2020-08-26 16:23:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 16:23:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 48 98 52 56 51 101 97 57 45 50 102 100 98 45 52 54 50 102 45 97 56 51 100 45 48 101 57 97 49 100 56 101 101 99 98 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e94988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 26 16:23:59.469: INFO: Pod "test-cleanup-controller-z2sqk" is available: &Pod{ObjectMeta:{test-cleanup-controller-z2sqk test-cleanup-controller- deployment-4466 /api/v1/namespaces/deployment-4466/pods/test-cleanup-controller-z2sqk 1dd30bfc-35fc-477d-9b87-b77f361d9b28 1092014 0 2020-08-26 16:23:44 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller aa5e9519-0739-4e3e-a3f4-92740b2f95ac 0xc0008b05b7 0xc0008b05b8}] [] [{kube-controller-manager Update v1 2020-08-26 16:23:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 97 53 101 57 53 49 57 45 48 55 51 57 45 52 101 51 101 45 97 51 102 52 45 57 50 55 52 48 98 50 102 57 53 97 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 
99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 16:23:56 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9lnx7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9lnx7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9lnx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:23:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:23:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:23:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:23:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.214,StartTime:2020-08-26 16:23:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 16:23:52 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6488801c6c52de7b2d8c50951f1048bd8f623a0efffba4ad4976a6ecf79f9f4d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 26 16:23:59.469: INFO: Pod "test-cleanup-deployment-b4867b47f-7vlq2" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-7vlq2 test-cleanup-deployment-b4867b47f- deployment-4466 /api/v1/namespaces/deployment-4466/pods/test-cleanup-deployment-b4867b47f-7vlq2 f4777582-35a8-40f2-9d2b-7cae7dea3a2b 1092023 0 2020-08-26 16:23:59 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f d5658181-c7ee-4ad6-91b2-496285e6ec88 0xc0008b0780 0xc0008b0781}] [] [{kube-controller-manager Update v1 2020-08-26 16:23:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 54 53 56 49 56 49 45 99 55 101 101 45 52 97 100 54 45 57 49 98 50 45 52 57 54 50 56 53 101 54 101 99 56 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9lnx7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9lnx7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9lnx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:23:59.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4466" for this suite. 
• [SLOW TEST:18.966 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":22,"skipped":390,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:24:01.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-560dd602-9cfa-4990-83f8-9b5d6ae9a89f in namespace container-probe-328
Aug 26 16:24:15.188: INFO: Started pod busybox-560dd602-9cfa-4990-83f8-9b5d6ae9a89f in namespace container-probe-328
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 16:24:15.342: INFO: Initial restart count of pod busybox-560dd602-9cfa-4990-83f8-9b5d6ae9a89f is 0
Aug 26 16:25:07.210: INFO: Restart count of pod container-probe-328/busybox-560dd602-9cfa-4990-83f8-9b5d6ae9a89f is now 1 (51.867999699s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:25:07.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-328" for this suite.
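
The pod exercised by this spec can be approximated with the following types. This is a minimal Go sketch against the v1.18-era k8s.io/api/core/v1 API (where the probe handler field is still named Handler; newer releases call it ProbeHandler); the image tag, shell command, and probe timings are illustrative assumptions, not the exact values the e2e framework uses.

package livenessexample

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// execLivenessPod returns a pod whose container creates /tmp/health, later
// removes it, and carries an exec liveness probe running "cat /tmp/health".
// Once the file is gone the probe fails and the kubelet restarts the
// container, which the spec observes as restartCount going from 0 to 1.
func execLivenessPod(name, namespace string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            Containers: []corev1.Container{{
                Name:    "busybox",
                Image:   "busybox:1.29", // assumed image tag
                Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{ // ProbeHandler in newer API versions
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 15,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
}

The restart logged above (restartCount 0 -> 1 after roughly 52 seconds) is the kubelet reacting to the failed exec probe once /tmp/health has been removed.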
• [SLOW TEST:67.761 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":397,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:25:08.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Aug 26 16:25:29.391: INFO: 5 pods remaining
Aug 26 16:25:29.391: INFO: 5 pods has nil DeletionTimestamp
Aug 26 16:25:29.391: INFO: 
STEP: Gathering metrics
W0826 16:25:33.815188       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 16:25:33.815: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:25:33.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-266" for this suite.

• [SLOW TEST:25.064 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":24,"skipped":418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:25:33.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:25:35.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4296" for this suite.
STEP: Destroying namespace "nspatchtest-ad48fd86-1ee1-4758-b084-a33fac52f67e-8062" for this suite.
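
The "patching the Namespace" step above amounts to a single PATCH request against the Namespace object. A minimal client-go sketch, assuming an already-built clientset; the label key and value shown are assumptions for illustration, not the label the conformance test applies.

package nspatchexample

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel adds a label to an existing namespace with a
// strategic-merge patch; the caller can then Get the namespace and assert
// the label is present, mirroring the spec's final step.
func patchNamespaceLabel(ctx context.Context, client kubernetes.Interface, name string) error {
    patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
    _, err := client.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    return err
}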
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":25,"skipped":472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 26 16:25:35.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2519 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2519 Aug 26 16:25:37.978: INFO: Found 0 stateful pods, waiting for 1 Aug 26 16:25:47.982: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 26 16:25:48.079: INFO: Deleting all statefulset in ns statefulset-2519 Aug 26 16:25:48.144: INFO: Scaling statefulset ss to 0 Aug 26 16:26:08.342: INFO: Waiting for statefulset status.replicas updated to 0 Aug 26 16:26:08.344: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 26 16:26:08.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2519" for this suite. 
• [SLOW TEST:32.678 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":26,"skipped":497,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:26:08.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:26:09.572: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/
(the same two-entry listing was returned for each of the 20 proxy requests in this spec; the 19 duplicate copies are omitted, and the remainder of this spec's output plus the header of the next spec, [sig-network] Services, is truncated in the source log)
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1054
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1054
I0826 16:26:10.540242       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1054, replica count: 2
I0826 16:26:13.590631       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:26:16.590855       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:26:19.591042       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 16:26:19.591: INFO: Creating new exec pod
Aug 26 16:26:26.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-1054 execpodvvs7j -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 26 16:26:26.895: INFO: stderr: "I0826 16:26:26.823222     590 log.go:172] (0xc000a2ebb0) (0xc000aaa3c0) Create stream\nI0826 16:26:26.823273     590 log.go:172] (0xc000a2ebb0) (0xc000aaa3c0) Stream added, broadcasting: 1\nI0826 16:26:26.826042     590 log.go:172] (0xc000a2ebb0) Reply frame received for 1\nI0826 16:26:26.826105     590 log.go:172] (0xc000a2ebb0) (0xc0009e0000) Create stream\nI0826 16:26:26.826120     590 log.go:172] (0xc000a2ebb0) (0xc0009e0000) Stream added, broadcasting: 3\nI0826 16:26:26.827329     590 log.go:172] (0xc000a2ebb0) Reply frame received for 3\nI0826 16:26:26.827375     590 log.go:172] (0xc000a2ebb0) (0xc000682c80) Create stream\nI0826 16:26:26.827393     590 log.go:172] (0xc000a2ebb0) (0xc000682c80) Stream added, broadcasting: 5\nI0826 16:26:26.828287     590 log.go:172] (0xc000a2ebb0) Reply frame received for 5\nI0826 16:26:26.884821     590 log.go:172] (0xc000a2ebb0) Data frame received for 5\nI0826 16:26:26.884851     590 log.go:172] (0xc000682c80) (5) Data frame handling\nI0826 16:26:26.884864     590 log.go:172] (0xc000682c80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0826 16:26:26.885510     590 log.go:172] (0xc000a2ebb0) Data frame received for 5\nI0826 16:26:26.885525     590 log.go:172] (0xc000682c80) (5) Data frame handling\nI0826 16:26:26.886382     590 log.go:172] (0xc000a2ebb0) Data frame received for 3\nI0826 16:26:26.886405     590 log.go:172] (0xc0009e0000) (3) Data frame handling\nI0826 16:26:26.887774     590 log.go:172] (0xc000a2ebb0) Data frame received for 1\nI0826 16:26:26.887789     590 log.go:172] (0xc000aaa3c0) (1) Data frame handling\nI0826 16:26:26.887797     590 log.go:172] (0xc000aaa3c0) (1) Data frame sent\nI0826 16:26:26.887813     590 log.go:172] (0xc000a2ebb0) (0xc000aaa3c0) Stream removed, broadcasting: 1\nI0826 16:26:26.887830     590 log.go:172] (0xc000a2ebb0) Go away received\nI0826 16:26:26.888095     590 log.go:172] (0xc000a2ebb0) (0xc000aaa3c0) Stream removed, broadcasting: 1\nI0826 16:26:26.888109     590 log.go:172] (0xc000a2ebb0) (0xc0009e0000) Stream removed, broadcasting: 3\nI0826 16:26:26.888118     590 log.go:172] (0xc000a2ebb0) (0xc000682c80) Stream removed, broadcasting: 5\n"
Aug 26 16:26:26.895: INFO: stdout: ""
Aug 26 16:26:26.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-1054 execpodvvs7j -- /bin/sh -x -c nc -zv -t -w 2 10.99.19.61 80'
Aug 26 16:26:27.076: INFO: stderr: "I0826 16:26:27.005457     611 log.go:172] (0xc0005d3ce0) (0xc0006c5680) Create stream\nI0826 16:26:27.005500     611 log.go:172] (0xc0005d3ce0) (0xc0006c5680) Stream added, broadcasting: 1\nI0826 16:26:27.007961     611 log.go:172] (0xc0005d3ce0) Reply frame received for 1\nI0826 16:26:27.007999     611 log.go:172] (0xc0005d3ce0) (0xc000a1c000) Create stream\nI0826 16:26:27.008015     611 log.go:172] (0xc0005d3ce0) (0xc000a1c000) Stream added, broadcasting: 3\nI0826 16:26:27.008920     611 log.go:172] (0xc0005d3ce0) Reply frame received for 3\nI0826 16:26:27.008958     611 log.go:172] (0xc0005d3ce0) (0xc000444aa0) Create stream\nI0826 16:26:27.008989     611 log.go:172] (0xc0005d3ce0) (0xc000444aa0) Stream added, broadcasting: 5\nI0826 16:26:27.009912     611 log.go:172] (0xc0005d3ce0) Reply frame received for 5\nI0826 16:26:27.066668     611 log.go:172] (0xc0005d3ce0) Data frame received for 3\nI0826 16:26:27.066698     611 log.go:172] (0xc0005d3ce0) Data frame received for 5\nI0826 16:26:27.066721     611 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0826 16:26:27.066745     611 log.go:172] (0xc000444aa0) (5) Data frame handling\nI0826 16:26:27.066762     611 log.go:172] (0xc000444aa0) (5) Data frame sent\nI0826 16:26:27.066767     611 log.go:172] (0xc0005d3ce0) Data frame received for 5\nI0826 16:26:27.066774     611 log.go:172] (0xc000444aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.19.61 80\nConnection to 10.99.19.61 80 port [tcp/http] succeeded!\nI0826 16:26:27.067712     611 log.go:172] (0xc0005d3ce0) Data frame received for 1\nI0826 16:26:27.067721     611 log.go:172] (0xc0006c5680) (1) Data frame handling\nI0826 16:26:27.067727     611 log.go:172] (0xc0006c5680) (1) Data frame sent\nI0826 16:26:27.067835     611 log.go:172] (0xc0005d3ce0) (0xc0006c5680) Stream removed, broadcasting: 1\nI0826 16:26:27.068133     611 log.go:172] (0xc0005d3ce0) (0xc0006c5680) Stream removed, broadcasting: 1\nI0826 16:26:27.068148     611 log.go:172] (0xc0005d3ce0) (0xc000a1c000) Stream removed, broadcasting: 3\nI0826 16:26:27.068156     611 log.go:172] (0xc0005d3ce0) (0xc000444aa0) Stream removed, broadcasting: 5\n"
Aug 26 16:26:27.077: INFO: stdout: ""
Aug 26 16:26:27.077: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:26:27.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1054" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:17.915 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":28,"skipped":521,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:26:27.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:26:32.060: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:26:35.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055991, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:26:37.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055991, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:26:39.031: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055992, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734055991, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:26:42.158: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:26:43.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5653" for this suite.
STEP: Destroying namespace "webhook-5653-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.826 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":29,"skipped":527,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:26:44.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:27:08.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4540" for this suite.

• [SLOW TEST:25.714 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":30,"skipped":535,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:27:10.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:27:29.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5358" for this suite.
STEP: Destroying namespace "nsdeletetest-4638" for this suite.
Aug 26 16:27:29.822: INFO: Namespace nsdeletetest-4638 was already deleted
STEP: Destroying namespace "nsdeletetest-2190" for this suite.

• [SLOW TEST:19.744 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":31,"skipped":537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:27:29.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-qxfh
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 16:27:30.401: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qxfh" in namespace "subpath-2059" to be "Succeeded or Failed"
Aug 26 16:27:30.426: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Pending", Reason="", readiness=false. Elapsed: 24.87565ms
Aug 26 16:27:32.956: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555350866s
Aug 26 16:27:35.285: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.884053251s
Aug 26 16:27:37.594: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Pending", Reason="", readiness=false. Elapsed: 7.193540606s
Aug 26 16:27:39.598: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 9.197714944s
Aug 26 16:27:41.603: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 11.201803716s
Aug 26 16:27:43.605: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 13.204638759s
Aug 26 16:27:45.734: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 15.332852469s
Aug 26 16:27:47.782: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 17.381549016s
Aug 26 16:27:50.062: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 19.661667687s
Aug 26 16:27:52.066: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 21.665190317s
Aug 26 16:27:54.427: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 24.026024524s
Aug 26 16:27:56.759: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 26.3577991s
Aug 26 16:27:58.878: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Running", Reason="", readiness=true. Elapsed: 28.477378702s
Aug 26 16:28:00.883: INFO: Pod "pod-subpath-test-configmap-qxfh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.482193891s
STEP: Saw pod success
Aug 26 16:28:00.883: INFO: Pod "pod-subpath-test-configmap-qxfh" satisfied condition "Succeeded or Failed"
Aug 26 16:28:00.886: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-qxfh container test-container-subpath-configmap-qxfh: 
STEP: delete the pod
Aug 26 16:28:01.593: INFO: Waiting for pod pod-subpath-test-configmap-qxfh to disappear
Aug 26 16:28:01.609: INFO: Pod pod-subpath-test-configmap-qxfh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qxfh
Aug 26 16:28:01.609: INFO: Deleting pod "pod-subpath-test-configmap-qxfh" in namespace "subpath-2059"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:01.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2059" for this suite.

• [SLOW TEST:31.814 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":32,"skipped":575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:01.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0826 16:28:06.640222       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 16:28:06.640: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:06.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6647" for this suite.

• [SLOW TEST:5.009 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":33,"skipped":600,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:06.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:09.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2041" for this suite.

• [SLOW TEST:5.246 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":34,"skipped":608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:11.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 26 16:28:16.978: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3232 /api/v1/namespaces/watch-3232/configmaps/e2e-watch-test-resource-version ca597898-ecad-4e14-ad04-9f46b81f88d7 1093476 0 2020-08-26 16:28:14 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-26 16:28:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 16:28:16.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3232 /api/v1/namespaces/watch-3232/configmaps/e2e-watch-test-resource-version ca597898-ecad-4e14-ad04-9f46b81f88d7 1093481 0 2020-08-26 16:28:14 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-26 16:28:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:16.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3232" for this suite.

• [SLOW TEST:5.881 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":35,"skipped":641,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:17.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:28:18.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143" in namespace "projected-1363" to be "Succeeded or Failed"
Aug 26 16:28:18.715: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Pending", Reason="", readiness=false. Elapsed: 182.545287ms
Aug 26 16:28:20.719: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186155426s
Aug 26 16:28:23.688: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Pending", Reason="", readiness=false. Elapsed: 5.154934545s
Aug 26 16:28:26.072: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Pending", Reason="", readiness=false. Elapsed: 7.539169754s
Aug 26 16:28:29.150: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Running", Reason="", readiness=true. Elapsed: 10.6175822s
Aug 26 16:28:31.473: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.939915657s
STEP: Saw pod success
Aug 26 16:28:31.473: INFO: Pod "downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143" satisfied condition "Succeeded or Failed"
Aug 26 16:28:31.477: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143 container client-container: 
STEP: delete the pod
Aug 26 16:28:32.357: INFO: Waiting for pod downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143 to disappear
Aug 26 16:28:32.392: INFO: Pod downwardapi-volume-6b463569-0d8d-414e-aa07-57df51418143 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:32.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1363" for this suite.

• [SLOW TEST:14.795 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":654,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:32.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:28:34.117: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:28:36.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056113, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:28:38.490: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056113, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:28:40.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056113, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:28:42.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056114, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056113, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:28:45.991: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:28:47.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9311" for this suite.
STEP: Destroying namespace "webhook-9311-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.947 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":37,"skipped":674,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:28:49.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 16:29:16.758: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 16:29:17.715: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 16:29:19.716: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 16:29:19.962: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 16:29:21.716: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 16:29:21.904: INFO: Pod pod-with-prestop-http-hook still exists
Aug 26 16:29:23.716: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 26 16:29:23.790: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:29:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5240" for this suite.

• [SLOW TEST:34.802 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":686,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:29:24.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:29:27.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1673" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":39,"skipped":706,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:29:27.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:29:35.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:29:37.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:29:40.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:29:41.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:29:43.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:29:45.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:29:47.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056175, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056176, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056173, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:29:51.655: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:29:55.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7794" for this suite.
STEP: Destroying namespace "webhook-7794-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:28.903 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":40,"skipped":706,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:29:56.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:29:57.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1" in namespace "downward-api-5484" to be "Succeeded or Failed"
Aug 26 16:29:57.599: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1": Phase="Pending", Reason="", readiness=false. Elapsed: 437.630833ms
Aug 26 16:30:00.085: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923417848s
Aug 26 16:30:02.215: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053668571s
Aug 26 16:30:04.267: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105568214s
Aug 26 16:30:06.933: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.771597593s
STEP: Saw pod success
Aug 26 16:30:06.933: INFO: Pod "downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1" satisfied condition "Succeeded or Failed"
Aug 26 16:30:07.396: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1 container client-container: 
STEP: delete the pod
Aug 26 16:30:08.035: INFO: Waiting for pod downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1 to disappear
Aug 26 16:30:08.090: INFO: Pod downwardapi-volume-0aab63e9-c634-495f-953c-62329cee02e1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:30:08.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5484" for this suite.

• [SLOW TEST:12.065 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":720,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:30:08.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:30:28.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-907" for this suite.

• [SLOW TEST:20.874 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":42,"skipped":723,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:30:28.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2096.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 16:30:57.241: INFO: DNS probes using dns-2096/dns-test-6fcde526-2614-471c-adf2-4abd1a339e2d succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:30:57.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2096" for this suite.

• [SLOW TEST:28.904 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":43,"skipped":728,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:30:57.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-df7dad12-de84-4360-a3e7-d75e2cdd4ef4
STEP: Creating a pod to test consume secrets
Aug 26 16:30:59.728: INFO: Waiting up to 5m0s for pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45" in namespace "secrets-2455" to be "Succeeded or Failed"
Aug 26 16:30:59.963: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Pending", Reason="", readiness=false. Elapsed: 234.67202ms
Aug 26 16:31:01.997: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268785197s
Aug 26 16:31:04.199: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471262858s
Aug 26 16:31:06.299: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570846169s
Aug 26 16:31:09.131: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Pending", Reason="", readiness=false. Elapsed: 9.402503349s
Aug 26 16:31:11.514: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Running", Reason="", readiness=true. Elapsed: 11.785769931s
Aug 26 16:31:13.517: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.789187041s
STEP: Saw pod success
Aug 26 16:31:13.517: INFO: Pod "pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45" satisfied condition "Succeeded or Failed"
Aug 26 16:31:13.520: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45 container secret-volume-test: 
STEP: delete the pod
Aug 26 16:31:13.694: INFO: Waiting for pod pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45 to disappear
Aug 26 16:31:13.696: INFO: Pod pod-secrets-cb121ad3-14d6-47a8-8f9f-66e67f225c45 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:31:13.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2455" for this suite.
STEP: Destroying namespace "secret-namespace-7028" for this suite.

• [SLOW TEST:15.835 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":744,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:31:13.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:31:52.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5609" for this suite.
STEP: Destroying namespace "nsdeletetest-9387" for this suite.
Aug 26 16:31:52.916: INFO: Namespace nsdeletetest-9387 was already deleted
STEP: Destroying namespace "nsdeletetest-8422" for this suite.

• [SLOW TEST:39.274 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":45,"skipped":757,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:31:52.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3651
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3651
STEP: Creating statefulset with conflicting port in namespace statefulset-3651
STEP: Waiting until pod test-pod starts running in namespace statefulset-3651
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3651
Aug 26 16:32:07.960: INFO: Observed stateful pod in namespace: statefulset-3651, name: ss-0, uid: 74604f83-a428-4324-8152-1f591cccaacc, status phase: Pending. Waiting for statefulset controller to delete.
Aug 26 16:32:08.526: INFO: Observed stateful pod in namespace: statefulset-3651, name: ss-0, uid: 74604f83-a428-4324-8152-1f591cccaacc, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 16:32:09.100: INFO: Observed stateful pod in namespace: statefulset-3651, name: ss-0, uid: 74604f83-a428-4324-8152-1f591cccaacc, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 16:32:11.221: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3651
STEP: Removing pod with conflicting port in namespace statefulset-3651
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3651 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 26 16:32:28.171: INFO: Deleting all statefulset in ns statefulset-3651
Aug 26 16:32:28.176: INFO: Scaling statefulset ss to 0
Aug 26 16:32:39.141: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:32:39.144: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:32:39.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3651" for this suite.

• [SLOW TEST:46.766 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":46,"skipped":763,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:32:39.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-jf6f4 in namespace proxy-5655
I0826 16:32:43.415578       7 runners.go:190] Created replication controller with name: proxy-service-jf6f4, namespace: proxy-5655, replica count: 1
I0826 16:32:44.466131       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:45.466340       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:46.466551       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:47.469433       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:48.469664       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:49.469894       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:50.470129       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:51.470375       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:52.470617       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:53.470855       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:54.471048       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:32:55.471272       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:32:56.471478       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:32:57.471753       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:32:58.471949       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:32:59.472129       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:33:00.472320       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0826 16:33:01.472541       7 runners.go:190] proxy-service-jf6f4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 16:33:02.910: INFO: setup took 20.667891481s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 26 16:33:03.278: INFO: (0) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 367.637712ms)
Aug 26 16:33:03.278: INFO: (0) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 368.019045ms)
Aug 26 16:33:03.289: INFO: (0) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 378.573961ms)
Aug 26 16:33:03.290: INFO: (0) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 378.871005ms)
Aug 26 16:33:03.290: INFO: (0) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 379.247757ms)
Aug 26 16:33:03.290: INFO: (0) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 379.75023ms)
Aug 26 16:33:03.290: INFO: (0) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 379.965813ms)
Aug 26 16:33:03.290: INFO: (0) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 379.717545ms)
Aug 26 16:33:03.291: INFO: (0) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 380.758054ms)
Aug 26 16:33:03.296: INFO: (0) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 4.095861ms)
Aug 26 16:33:03.742: INFO: (1) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.860338ms)
Aug 26 16:33:03.744: INFO: (1) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 6.963924ms)
Aug 26 16:33:03.744: INFO: (1) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 7.359559ms)
Aug 26 16:33:03.745: INFO: (1) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 7.510481ms)
Aug 26 16:33:03.745: INFO: (1) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.459164ms)
Aug 26 16:33:03.745: INFO: (1) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 8.037948ms)
Aug 26 16:33:03.746: INFO: (1) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 8.096263ms)
Aug 26 16:33:03.746: INFO: (1) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 8.057821ms)
Aug 26 16:33:03.746: INFO: (1) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 8.35326ms)
Aug 26 16:33:03.746: INFO: (1) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 8.281091ms)
Aug 26 16:33:03.746: INFO: (1) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 8.313954ms)
Aug 26 16:33:03.749: INFO: (2) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 2.701923ms)
Aug 26 16:33:03.749: INFO: (2) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 2.918559ms)
Aug 26 16:33:03.749: INFO: (2) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 2.925946ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 4.601492ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.007669ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 4.967239ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.020838ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 4.945693ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.093095ms)
Aug 26 16:33:03.751: INFO: (2) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.467624ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.734278ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.850545ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.906381ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 5.976697ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 5.956788ms)
Aug 26 16:33:03.752: INFO: (2) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 5.174908ms)
Aug 26 16:33:03.757: INFO: (3) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 4.95475ms)
Aug 26 16:33:03.757: INFO: (3) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 5.196568ms)
Aug 26 16:33:03.757: INFO: (3) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 5.60537ms)
Aug 26 16:33:03.758: INFO: (3) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 6.031224ms)
Aug 26 16:33:03.758: INFO: (3) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 6.39778ms)
Aug 26 16:33:03.759: INFO: (3) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 6.84878ms)
Aug 26 16:33:03.759: INFO: (3) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 6.769865ms)
Aug 26 16:33:03.764: INFO: (4) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 4.56021ms)
Aug 26 16:33:03.764: INFO: (4) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.577661ms)
Aug 26 16:33:03.764: INFO: (4) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.847781ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 5.468866ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.459724ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.562844ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.671521ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.672772ms)
Aug 26 16:33:03.765: INFO: (4) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 5.658966ms)
Aug 26 16:33:03.766: INFO: (4) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 7.250363ms)
Aug 26 16:33:03.766: INFO: (4) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.331644ms)
Aug 26 16:33:03.766: INFO: (4) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 7.244734ms)
Aug 26 16:33:03.766: INFO: (4) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 7.207035ms)
Aug 26 16:33:03.766: INFO: (4) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 7.349447ms)
Aug 26 16:33:03.769: INFO: (5) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 2.970113ms)
Aug 26 16:33:03.769: INFO: (5) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.002511ms)
Aug 26 16:33:03.769: INFO: (5) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 3.059535ms)
Aug 26 16:33:03.769: INFO: (5) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 2.966421ms)
Aug 26 16:33:03.769: INFO: (5) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 4.913845ms)
Aug 26 16:33:03.771: INFO: (5) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 4.915565ms)
Aug 26 16:33:03.771: INFO: (5) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.962577ms)
Aug 26 16:33:03.771: INFO: (5) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 4.975974ms)
Aug 26 16:33:03.771: INFO: (5) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.02892ms)
Aug 26 16:33:03.778: INFO: (6) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 6.514556ms)
Aug 26 16:33:03.778: INFO: (6) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 6.447583ms)
Aug 26 16:33:03.778: INFO: (6) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 6.473577ms)
Aug 26 16:33:03.778: INFO: (6) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 6.59254ms)
Aug 26 16:33:03.778: INFO: (6) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 7.11412ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 7.210743ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 7.197192ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 7.169477ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 7.104326ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 7.150141ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 7.209988ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 7.191921ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.270473ms)
Aug 26 16:33:03.779: INFO: (6) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.224818ms)
Aug 26 16:33:03.781: INFO: (7) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 2.18685ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.628231ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 3.949965ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 4.015012ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 4.027105ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 3.970341ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.039949ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.013052ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 4.157954ms)
Aug 26 16:33:03.783: INFO: (7) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 3.623015ms)
Aug 26 16:33:03.796: INFO: (8) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.64633ms)
Aug 26 16:33:03.796: INFO: (8) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 3.626603ms)
Aug 26 16:33:03.796: INFO: (8) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 4.481813ms)
Aug 26 16:33:03.797: INFO: (8) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.138493ms)
Aug 26 16:33:03.797: INFO: (8) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 5.129096ms)
Aug 26 16:33:03.798: INFO: (8) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.548461ms)
Aug 26 16:33:03.798: INFO: (8) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.531018ms)
Aug 26 16:33:03.798: INFO: (8) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.451963ms)
Aug 26 16:33:03.798: INFO: (8) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 5.514704ms)
Aug 26 16:33:03.798: INFO: (8) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 5.55924ms)
Aug 26 16:33:03.801: INFO: (9) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 2.896177ms)
Aug 26 16:33:03.801: INFO: (9) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 2.996368ms)
Aug 26 16:33:03.801: INFO: (9) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 3.071298ms)
Aug 26 16:33:03.801: INFO: (9) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 3.59393ms)
Aug 26 16:33:03.802: INFO: (9) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 4.193162ms)
Aug 26 16:33:03.802: INFO: (9) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.109359ms)
Aug 26 16:33:03.802: INFO: (9) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 4.119193ms)
Aug 26 16:33:03.802: INFO: (9) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 3.498237ms)
Aug 26 16:33:03.807: INFO: (10) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 4.217256ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 4.504506ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.712804ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.786998ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 4.777576ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 4.795142ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 4.843013ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 4.998881ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 4.987108ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.022989ms)
Aug 26 16:33:03.808: INFO: (10) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 4.96956ms)
Aug 26 16:33:03.810: INFO: (11) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 1.862797ms)
Aug 26 16:33:03.813: INFO: (11) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 4.545805ms)
Aug 26 16:33:03.813: INFO: (11) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.770336ms)
Aug 26 16:33:03.813: INFO: (11) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 4.98184ms)
Aug 26 16:33:03.813: INFO: (11) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 5.017526ms)
Aug 26 16:33:03.813: INFO: (11) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 5.03143ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 5.644466ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.674502ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.690999ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.661591ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.685507ms)
Aug 26 16:33:03.814: INFO: (11) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 5.673952ms)
Aug 26 16:33:03.816: INFO: (12) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 1.877391ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.743704ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.026379ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 4.027587ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 4.012293ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.077686ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 4.07966ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 4.141132ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.156859ms)
Aug 26 16:33:03.818: INFO: (12) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 4.268406ms)
Aug 26 16:33:03.819: INFO: (12) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 4.661444ms)
Aug 26 16:33:03.819: INFO: (12) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 5.203952ms)
Aug 26 16:33:03.820: INFO: (12) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.971692ms)
Aug 26 16:33:03.820: INFO: (12) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.976627ms)
Aug 26 16:33:03.820: INFO: (12) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.982768ms)
Aug 26 16:33:03.823: INFO: (13) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 2.971219ms)
Aug 26 16:33:03.823: INFO: (13) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 3.007375ms)
Aug 26 16:33:03.823: INFO: (13) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 2.96319ms)
Aug 26 16:33:03.823: INFO: (13) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 3.042264ms)
Aug 26 16:33:03.823: INFO: (13) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 5.128471ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 5.101124ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.196091ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.121982ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.323821ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 5.375735ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 5.340359ms)
Aug 26 16:33:03.825: INFO: (13) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 5.337082ms)
Aug 26 16:33:03.829: INFO: (14) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.568339ms)
Aug 26 16:33:03.829: INFO: (14) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 3.809668ms)
Aug 26 16:33:03.829: INFO: (14) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 3.877703ms)
Aug 26 16:33:03.829: INFO: (14) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 3.870263ms)
Aug 26 16:33:03.829: INFO: (14) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 3.920538ms)
Aug 26 16:33:03.830: INFO: (14) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 3.965882ms)
Aug 26 16:33:03.830: INFO: (14) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 4.297082ms)
Aug 26 16:33:03.830: INFO: (14) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 4.369212ms)
Aug 26 16:33:03.830: INFO: (14) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.351484ms)
Aug 26 16:33:03.830: INFO: (14) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 281.753288ms)
Aug 26 16:33:04.114: INFO: (15) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 282.388154ms)
Aug 26 16:33:04.115: INFO: (15) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 283.036514ms)
Aug 26 16:33:04.115: INFO: (15) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 283.459314ms)
Aug 26 16:33:04.116: INFO: (15) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 284.51222ms)
Aug 26 16:33:04.116: INFO: (15) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 284.921955ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 285.276714ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 285.222959ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 285.342063ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 285.259884ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 285.336777ms)
Aug 26 16:33:04.117: INFO: (15) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 285.555833ms)
Aug 26 16:33:04.120: INFO: (16) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 3.279551ms)
Aug 26 16:33:04.121: INFO: (16) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 4.037325ms)
Aug 26 16:33:04.121: INFO: (16) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 4.223393ms)
Aug 26 16:33:04.121: INFO: (16) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 4.305612ms)
Aug 26 16:33:04.121: INFO: (16) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 4.326839ms)
Aug 26 16:33:04.121: INFO: (16) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 4.324365ms)
Aug 26 16:33:04.122: INFO: (16) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.020098ms)
Aug 26 16:33:04.122: INFO: (16) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 5.140652ms)
Aug 26 16:33:04.122: INFO: (16) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 5.333941ms)
Aug 26 16:33:04.123: INFO: (16) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.57877ms)
Aug 26 16:33:04.123: INFO: (16) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 5.488023ms)
Aug 26 16:33:04.123: INFO: (16) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 5.539731ms)
Aug 26 16:33:04.123: INFO: (16) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 5.637576ms)
Aug 26 16:33:04.127: INFO: (17) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.662115ms)
Aug 26 16:33:04.127: INFO: (17) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 4.659484ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: ... (200; 5.024673ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 5.178563ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.390782ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.353232ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 5.463976ms)
Aug 26 16:33:04.128: INFO: (17) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.536768ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 427.136567ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 427.150277ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 427.368512ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 427.35468ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 427.418394ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 427.519052ms)
Aug 26 16:33:04.550: INFO: (17) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 427.650841ms)
Aug 26 16:33:04.555: INFO: (18) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 4.207226ms)
Aug 26 16:33:04.555: INFO: (18) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 4.764465ms)
Aug 26 16:33:04.556: INFO: (18) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.519806ms)
Aug 26 16:33:04.556: INFO: (18) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:1080/proxy/: test<... (200; 5.541762ms)
Aug 26 16:33:04.556: INFO: (18) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 5.892568ms)
Aug 26 16:33:04.557: INFO: (18) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 6.232369ms)
Aug 26 16:33:04.557: INFO: (18) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm/proxy/: test (200; 6.178772ms)
Aug 26 16:33:04.557: INFO: (18) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 6.310207ms)
Aug 26 16:33:04.557: INFO: (18) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 6.379523ms)
Aug 26 16:33:04.557: INFO: (18) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 6.496424ms)
Aug 26 16:33:04.558: INFO: (18) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.246501ms)
Aug 26 16:33:04.558: INFO: (18) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 7.323487ms)
Aug 26 16:33:04.558: INFO: (18) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test<... (200; 4.927877ms)
Aug 26 16:33:04.564: INFO: (19) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.869866ms)
Aug 26 16:33:04.564: INFO: (19) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 5.837148ms)
Aug 26 16:33:04.564: INFO: (19) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:160/proxy/: foo (200; 5.907841ms)
Aug 26 16:33:04.564: INFO: (19) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:462/proxy/: tls qux (200; 6.263173ms)
Aug 26 16:33:04.565: INFO: (19) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:460/proxy/: tls baz (200; 6.445155ms)
Aug 26 16:33:04.565: INFO: (19) /api/v1/namespaces/proxy-5655/pods/https:proxy-service-jf6f4-xgbqm:443/proxy/: test (200; 6.405574ms)
Aug 26 16:33:04.565: INFO: (19) /api/v1/namespaces/proxy-5655/pods/http:proxy-service-jf6f4-xgbqm:1080/proxy/: ... (200; 6.566664ms)
Aug 26 16:33:04.565: INFO: (19) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname1/proxy/: tls baz (200; 6.951159ms)
Aug 26 16:33:04.566: INFO: (19) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname2/proxy/: bar (200; 7.372249ms)
Aug 26 16:33:04.566: INFO: (19) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname1/proxy/: foo (200; 7.676568ms)
Aug 26 16:33:04.566: INFO: (19) /api/v1/namespaces/proxy-5655/pods/proxy-service-jf6f4-xgbqm:162/proxy/: bar (200; 7.869626ms)
Aug 26 16:33:04.566: INFO: (19) /api/v1/namespaces/proxy-5655/services/https:proxy-service-jf6f4:tlsportname2/proxy/: tls qux (200; 7.883769ms)
Aug 26 16:33:04.566: INFO: (19) /api/v1/namespaces/proxy-5655/services/http:proxy-service-jf6f4:portname2/proxy/: bar (200; 8.081346ms)
Aug 26 16:33:04.567: INFO: (19) /api/v1/namespaces/proxy-5655/services/proxy-service-jf6f4:portname1/proxy/: foo (200; 8.612219ms)
STEP: deleting ReplicationController proxy-service-jf6f4 in namespace proxy-5655, will wait for the garbage collector to delete the pods
Aug 26 16:33:04.626: INFO: Deleting ReplicationController proxy-service-jf6f4 took: 7.015967ms
Aug 26 16:33:04.926: INFO: Terminating ReplicationController proxy-service-jf6f4 pods took: 300.244998ms
[AfterEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:10.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5655" for this suite.

• [SLOW TEST:31.693 seconds]
[sig-network] Proxy
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":47,"skipped":777,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:11.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:33:13.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1233'
Aug 26 16:33:26.175: INFO: stderr: ""
Aug 26 16:33:26.175: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Aug 26 16:33:26.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1233'
Aug 26 16:33:27.161: INFO: stderr: ""
Aug 26 16:33:27.161: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 16:33:28.166: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 16:33:28.166: INFO: Found 0 / 1
Aug 26 16:33:29.166: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 16:33:29.166: INFO: Found 0 / 1
Aug 26 16:33:30.165: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 16:33:30.165: INFO: Found 0 / 1
Aug 26 16:33:31.165: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 16:33:31.165: INFO: Found 1 / 1
Aug 26 16:33:31.165: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 16:33:31.169: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 16:33:31.169: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 16:33:31.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe pod agnhost-master-4tsjm --namespace=kubectl-1233'
Aug 26 16:33:31.291: INFO: stderr: ""
Aug 26 16:33:31.291: INFO: stdout: "Name:         agnhost-master-4tsjm\nNamespace:    kubectl-1233\nPriority:     0\nNode:         kali-worker/172.18.0.15\nStart Time:   Wed, 26 Aug 2020 16:33:26 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.173\nIPs:\n  IP:           10.244.1.173\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://6eb4475b7dad989be307b6edc924f83d8b44ab64a6a8681cefeee71cc2d27515\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 26 Aug 2020 16:33:30 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4tgb (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-t4tgb:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-t4tgb\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  5s    default-scheduler     Successfully assigned kubectl-1233/agnhost-master-4tsjm to kali-worker\n  Normal  Pulled     3s    kubelet, kali-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, kali-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, kali-worker  Started container agnhost-master\n"
Aug 26 16:33:31.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1233'
Aug 26 16:33:31.419: INFO: stderr: ""
Aug 26 16:33:31.419: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1233\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-4tsjm\n"
Aug 26 16:33:31.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1233'
Aug 26 16:33:31.520: INFO: stderr: ""
Aug 26 16:33:31.520: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1233\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.99.159.204\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.173:6379\nSession Affinity:  None\nEvents:            \n"
Aug 26 16:33:31.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe node kali-control-plane'
Aug 26 16:33:31.639: INFO: stderr: ""
Aug 26 16:33:31.639: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 23 Aug 2020 15:12:35 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 26 Aug 2020 16:33:22 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 26 Aug 2020 16:31:29 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 26 Aug 2020 16:31:29 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 26 Aug 2020 16:31:29 +0000   Sun, 23 Aug 2020 15:12:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 26 Aug 2020 16:31:29 +0000   Sun, 23 Aug 2020 15:13:30 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.16\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2cdec6c7db1f4ffb92010874f8f6c78a\n  System UUID:                97843c5f-7109-4963-bbac-ed94fa5ea417\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu Groovy Gorilla (development branch)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.0-rc.1-4-g43366250\n  Kubelet Version:            v1.18.8\n  Kube-Proxy Version:         v1.18.8\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-4dkcx                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d1h\n  kube-system                 coredns-66bff467f8-wt2xm                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3d1h\n  kube-system                 
etcd-kali-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h\n  kube-system                 kindnet-4vm7t                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      3d1h\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3d1h\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3d1h\n  kube-system                 kube-proxy-lnmvk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3d1h\n  local-path-storage          local-path-provisioner-5b4b545c55-bfxpd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
Aug 26 16:33:31.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config describe namespace kubectl-1233'
Aug 26 16:33:31.752: INFO: stderr: ""
Aug 26 16:33:31.752: INFO: stdout: "Name:         kubectl-1233\nLabels:       e2e-framework=kubectl\n              e2e-run=14a5f9cd-b16e-4efd-8686-051fc7845f57\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:31.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1233" for this suite.

• [SLOW TEST:20.315 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":48,"skipped":779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:31.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 16:33:31.872: INFO: Waiting up to 5m0s for pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b" in namespace "emptydir-8348" to be "Succeeded or Failed"
Aug 26 16:33:31.923: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.381542ms
Aug 26 16:33:33.927: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055176487s
Aug 26 16:33:35.931: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059242056s
Aug 26 16:33:37.935: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062464767s
Aug 26 16:33:40.071: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.198854022s
STEP: Saw pod success
Aug 26 16:33:40.071: INFO: Pod "pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b" satisfied condition "Succeeded or Failed"
Aug 26 16:33:40.074: INFO: Trying to get logs from node kali-worker pod pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b container test-container: 
STEP: delete the pod
Aug 26 16:33:40.207: INFO: Waiting for pod pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b to disappear
Aug 26 16:33:40.223: INFO: Pod pod-efd50df6-7cc5-4a3f-8c0f-c443b379d75b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8348" for this suite.

• [SLOW TEST:8.472 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":831,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:40.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-c63457b0-c3b2-4376-9083-c91be47fe747
STEP: Creating a pod to test consume configMaps
Aug 26 16:33:40.308: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27" in namespace "projected-908" to be "Succeeded or Failed"
Aug 26 16:33:40.322: INFO: Pod "pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27": Phase="Pending", Reason="", readiness=false. Elapsed: 14.138842ms
Aug 26 16:33:42.326: INFO: Pod "pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018091781s
Aug 26 16:33:44.330: INFO: Pod "pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021455412s
STEP: Saw pod success
Aug 26 16:33:44.330: INFO: Pod "pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27" satisfied condition "Succeeded or Failed"
Aug 26 16:33:44.355: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 16:33:44.430: INFO: Waiting for pod pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27 to disappear
Aug 26 16:33:44.438: INFO: Pod pod-projected-configmaps-14cba0bc-7056-4251-a3a1-b56120be3d27 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:44.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-908" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":836,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:44.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 26 16:33:49.638: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:49.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5711" for this suite.

• [SLOW TEST:5.322 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":51,"skipped":843,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:49.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-54324dac-dd6d-4fca-ad22-20b7aa592339
STEP: Creating a pod to test consume secrets
Aug 26 16:33:49.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9" in namespace "projected-8128" to be "Succeeded or Failed"
Aug 26 16:33:49.984: INFO: Pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9": Phase="Pending", Reason="", readiness=false. Elapsed: 75.947297ms
Aug 26 16:33:52.091: INFO: Pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182887692s
Aug 26 16:33:54.112: INFO: Pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204756612s
Aug 26 16:33:56.115: INFO: Pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207599099s
STEP: Saw pod success
Aug 26 16:33:56.115: INFO: Pod "pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9" satisfied condition "Succeeded or Failed"
Aug 26 16:33:56.117: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9 container secret-volume-test: 
STEP: delete the pod
Aug 26 16:33:56.673: INFO: Waiting for pod pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9 to disappear
Aug 26 16:33:56.837: INFO: Pod pod-projected-secrets-f8809f0e-d4a6-486a-a821-6f43d2ad40a9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:33:56.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8128" for this suite.

• [SLOW TEST:7.072 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":849,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:33:56.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:33:59.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:34:01.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:34:03.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:34:05.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056439, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:34:09.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:34:22.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9282" for this suite.
STEP: Destroying namespace "webhook-9282-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.911 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":53,"skipped":856,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:34:22.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9528
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-9528
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9528
Aug 26 16:34:22.975: INFO: Found 0 stateful pods, waiting for 1
Aug 26 16:34:32.979: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 26 16:34:32.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:34:33.258: INFO: stderr: "I0826 16:34:33.114508     792 log.go:172] (0xc0009c33f0) (0xc0009546e0) Create stream\nI0826 16:34:33.114569     792 log.go:172] (0xc0009c33f0) (0xc0009546e0) Stream added, broadcasting: 1\nI0826 16:34:33.119667     792 log.go:172] (0xc0009c33f0) Reply frame received for 1\nI0826 16:34:33.119703     792 log.go:172] (0xc0009c33f0) (0xc00067b680) Create stream\nI0826 16:34:33.119712     792 log.go:172] (0xc0009c33f0) (0xc00067b680) Stream added, broadcasting: 3\nI0826 16:34:33.120905     792 log.go:172] (0xc0009c33f0) Reply frame received for 3\nI0826 16:34:33.120952     792 log.go:172] (0xc0009c33f0) (0xc00054caa0) Create stream\nI0826 16:34:33.120965     792 log.go:172] (0xc0009c33f0) (0xc00054caa0) Stream added, broadcasting: 5\nI0826 16:34:33.121960     792 log.go:172] (0xc0009c33f0) Reply frame received for 5\nI0826 16:34:33.197025     792 log.go:172] (0xc0009c33f0) Data frame received for 5\nI0826 16:34:33.197054     792 log.go:172] (0xc00054caa0) (5) Data frame handling\nI0826 16:34:33.197073     792 log.go:172] (0xc00054caa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:34:33.244038     792 log.go:172] (0xc0009c33f0) Data frame received for 3\nI0826 16:34:33.244083     792 log.go:172] (0xc00067b680) (3) Data frame handling\nI0826 16:34:33.244126     792 log.go:172] (0xc00067b680) (3) Data frame sent\nI0826 16:34:33.244539     792 log.go:172] (0xc0009c33f0) Data frame received for 3\nI0826 16:34:33.244586     792 log.go:172] (0xc00067b680) (3) Data frame handling\nI0826 16:34:33.245111     792 log.go:172] (0xc0009c33f0) Data frame received for 5\nI0826 16:34:33.245154     792 log.go:172] (0xc00054caa0) (5) Data frame handling\nI0826 16:34:33.247340     792 log.go:172] (0xc0009c33f0) Data frame received for 1\nI0826 16:34:33.247376     792 log.go:172] (0xc0009546e0) (1) Data frame handling\nI0826 16:34:33.247393     792 log.go:172] (0xc0009546e0) (1) Data frame sent\nI0826 16:34:33.247415     792 log.go:172] (0xc0009c33f0) (0xc0009546e0) Stream removed, broadcasting: 1\nI0826 16:34:33.247438     792 log.go:172] (0xc0009c33f0) Go away received\nI0826 16:34:33.247962     792 log.go:172] (0xc0009c33f0) (0xc0009546e0) Stream removed, broadcasting: 1\nI0826 16:34:33.247989     792 log.go:172] (0xc0009c33f0) (0xc00067b680) Stream removed, broadcasting: 3\nI0826 16:34:33.248007     792 log.go:172] (0xc0009c33f0) (0xc00054caa0) Stream removed, broadcasting: 5\n"
Aug 26 16:34:33.258: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:34:33.258: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
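Moving index.html out of httpd's docroot makes the pod's readiness check fail, and the test then verifies that scale-up is not blocked on the unready pod. Burst scaling of this kind is associated with a StatefulSet whose pods are managed in parallel rather than ordered; a minimal sketch of such a spec follows, where the set name, namespace, replica count, container name and service name come from the log, and the labels, image and probe details are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss"} // assumption: the label set is not shown in the log

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-9528"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            int32Ptr(3),
			ServiceName:         "test",                       // headless service created by the test above
			PodManagementPolicy: appsv1.ParallelPodManagement, // pods are created/deleted in a burst, not one by one
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4", // assumption: the log only shows the httpd docroot path
						// The behaviour in the log presumes a readiness check on /index.html
						// (e.g. an HTTP readiness probe), so that moving the file away flips
						// the pod to Ready=false; it is omitted here to keep the sketch compact.
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}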

Aug 26 16:34:33.262: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 16:34:43.267: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:34:43.267: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:34:43.284: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:34:43.284: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:34:43.284: INFO: 
Aug 26 16:34:43.285: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 26 16:34:44.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992743538s
Aug 26 16:34:45.481: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987575537s
Aug 26 16:34:46.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.796179723s
Aug 26 16:34:47.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.685369759s
Aug 26 16:34:48.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.592796525s
Aug 26 16:34:49.698: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.584436332s
Aug 26 16:34:50.703: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.578954765s
Aug 26 16:34:51.707: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.57394144s
Aug 26 16:34:52.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 569.908153ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9528
Aug 26 16:34:53.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:34:53.910: INFO: stderr: "I0826 16:34:53.842262     812 log.go:172] (0xc000a4e000) (0xc0008ce000) Create stream\nI0826 16:34:53.842312     812 log.go:172] (0xc000a4e000) (0xc0008ce000) Stream added, broadcasting: 1\nI0826 16:34:53.844950     812 log.go:172] (0xc000a4e000) Reply frame received for 1\nI0826 16:34:53.844978     812 log.go:172] (0xc000a4e000) (0xc00021d5e0) Create stream\nI0826 16:34:53.844985     812 log.go:172] (0xc000a4e000) (0xc00021d5e0) Stream added, broadcasting: 3\nI0826 16:34:53.845547     812 log.go:172] (0xc000a4e000) Reply frame received for 3\nI0826 16:34:53.845569     812 log.go:172] (0xc000a4e000) (0xc000509ae0) Create stream\nI0826 16:34:53.845577     812 log.go:172] (0xc000a4e000) (0xc000509ae0) Stream added, broadcasting: 5\nI0826 16:34:53.846300     812 log.go:172] (0xc000a4e000) Reply frame received for 5\nI0826 16:34:53.906588     812 log.go:172] (0xc000a4e000) Data frame received for 5\nI0826 16:34:53.906608     812 log.go:172] (0xc000509ae0) (5) Data frame handling\nI0826 16:34:53.906616     812 log.go:172] (0xc000509ae0) (5) Data frame sent\nI0826 16:34:53.906622     812 log.go:172] (0xc000a4e000) Data frame received for 5\nI0826 16:34:53.906628     812 log.go:172] (0xc000509ae0) (5) Data frame handling\nI0826 16:34:53.906636     812 log.go:172] (0xc000a4e000) Data frame received for 3\nI0826 16:34:53.906641     812 log.go:172] (0xc00021d5e0) (3) Data frame handling\nI0826 16:34:53.906647     812 log.go:172] (0xc00021d5e0) (3) Data frame sent\nI0826 16:34:53.906652     812 log.go:172] (0xc000a4e000) Data frame received for 3\nI0826 16:34:53.906657     812 log.go:172] (0xc00021d5e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 16:34:53.907295     812 log.go:172] (0xc000a4e000) Data frame received for 1\nI0826 16:34:53.907314     812 log.go:172] (0xc0008ce000) (1) Data frame handling\nI0826 16:34:53.907328     812 log.go:172] (0xc0008ce000) (1) Data frame sent\nI0826 16:34:53.907466     812 log.go:172] (0xc000a4e000) (0xc0008ce000) Stream removed, broadcasting: 1\nI0826 16:34:53.907749     812 log.go:172] (0xc000a4e000) Go away received\nI0826 16:34:53.907810     812 log.go:172] (0xc000a4e000) (0xc0008ce000) Stream removed, broadcasting: 1\nI0826 16:34:53.907834     812 log.go:172] (0xc000a4e000) (0xc00021d5e0) Stream removed, broadcasting: 3\nI0826 16:34:53.907849     812 log.go:172] (0xc000a4e000) (0xc000509ae0) Stream removed, broadcasting: 5\n"
Aug 26 16:34:53.911: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:34:53.911: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:34:53.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:34:54.167: INFO: stderr: "I0826 16:34:54.094350     829 log.go:172] (0xc0009f6370) (0xc0009c8dc0) Create stream\nI0826 16:34:54.094415     829 log.go:172] (0xc0009f6370) (0xc0009c8dc0) Stream added, broadcasting: 1\nI0826 16:34:54.099047     829 log.go:172] (0xc0009f6370) Reply frame received for 1\nI0826 16:34:54.099072     829 log.go:172] (0xc0009f6370) (0xc00065f7c0) Create stream\nI0826 16:34:54.099078     829 log.go:172] (0xc0009f6370) (0xc00065f7c0) Stream added, broadcasting: 3\nI0826 16:34:54.099652     829 log.go:172] (0xc0009f6370) Reply frame received for 3\nI0826 16:34:54.099675     829 log.go:172] (0xc0009f6370) (0xc0004dabe0) Create stream\nI0826 16:34:54.099682     829 log.go:172] (0xc0009f6370) (0xc0004dabe0) Stream added, broadcasting: 5\nI0826 16:34:54.100428     829 log.go:172] (0xc0009f6370) Reply frame received for 5\nI0826 16:34:54.156615     829 log.go:172] (0xc0009f6370) Data frame received for 3\nI0826 16:34:54.156676     829 log.go:172] (0xc00065f7c0) (3) Data frame handling\nI0826 16:34:54.156695     829 log.go:172] (0xc00065f7c0) (3) Data frame sent\nI0826 16:34:54.156702     829 log.go:172] (0xc0009f6370) Data frame received for 3\nI0826 16:34:54.156707     829 log.go:172] (0xc00065f7c0) (3) Data frame handling\nI0826 16:34:54.156785     829 log.go:172] (0xc0009f6370) Data frame received for 5\nI0826 16:34:54.156793     829 log.go:172] (0xc0004dabe0) (5) Data frame handling\nI0826 16:34:54.156805     829 log.go:172] (0xc0004dabe0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 16:34:54.157093     829 log.go:172] (0xc0009f6370) Data frame received for 5\nI0826 16:34:54.157111     829 log.go:172] (0xc0004dabe0) (5) Data frame handling\nI0826 16:34:54.158562     829 log.go:172] (0xc0009f6370) Data frame received for 1\nI0826 16:34:54.158587     829 log.go:172] (0xc0009c8dc0) (1) Data frame handling\nI0826 16:34:54.158702     829 log.go:172] (0xc0009c8dc0) (1) Data frame sent\nI0826 16:34:54.158747     829 log.go:172] (0xc0009f6370) (0xc0009c8dc0) Stream removed, broadcasting: 1\nI0826 16:34:54.158789     829 log.go:172] (0xc0009f6370) Go away received\nI0826 16:34:54.158974     829 log.go:172] (0xc0009f6370) (0xc0009c8dc0) Stream removed, broadcasting: 1\nI0826 16:34:54.158987     829 log.go:172] (0xc0009f6370) (0xc00065f7c0) Stream removed, broadcasting: 3\nI0826 16:34:54.158994     829 log.go:172] (0xc0009f6370) (0xc0004dabe0) Stream removed, broadcasting: 5\n"
Aug 26 16:34:54.167: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:34:54.167: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:34:54.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:34:56.007: INFO: stderr: "I0826 16:34:54.393329     849 log.go:172] (0xc000924a50) (0xc000ba25a0) Create stream\nI0826 16:34:54.393373     849 log.go:172] (0xc000924a50) (0xc000ba25a0) Stream added, broadcasting: 1\nI0826 16:34:54.394919     849 log.go:172] (0xc000924a50) Reply frame received for 1\nI0826 16:34:54.394939     849 log.go:172] (0xc000924a50) (0xc000a56320) Create stream\nI0826 16:34:54.394949     849 log.go:172] (0xc000924a50) (0xc000a56320) Stream added, broadcasting: 3\nI0826 16:34:54.395486     849 log.go:172] (0xc000924a50) Reply frame received for 3\nI0826 16:34:54.395507     849 log.go:172] (0xc000924a50) (0xc000ba2640) Create stream\nI0826 16:34:54.395514     849 log.go:172] (0xc000924a50) (0xc000ba2640) Stream added, broadcasting: 5\nI0826 16:34:54.395989     849 log.go:172] (0xc000924a50) Reply frame received for 5\nI0826 16:34:56.001111     849 log.go:172] (0xc000924a50) Data frame received for 3\nI0826 16:34:56.001136     849 log.go:172] (0xc000924a50) Data frame received for 5\nI0826 16:34:56.001158     849 log.go:172] (0xc000ba2640) (5) Data frame handling\nI0826 16:34:56.001175     849 log.go:172] (0xc000ba2640) (5) Data frame sent\nI0826 16:34:56.001193     849 log.go:172] (0xc000924a50) Data frame received for 5\nI0826 16:34:56.001207     849 log.go:172] (0xc000ba2640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0826 16:34:56.001224     849 log.go:172] (0xc000a56320) (3) Data frame handling\nI0826 16:34:56.001241     849 log.go:172] (0xc000a56320) (3) Data frame sent\nI0826 16:34:56.001249     849 log.go:172] (0xc000924a50) Data frame received for 3\nI0826 16:34:56.001258     849 log.go:172] (0xc000a56320) (3) Data frame handling\nI0826 16:34:56.002369     849 log.go:172] (0xc000924a50) Data frame received for 1\nI0826 16:34:56.002381     849 log.go:172] (0xc000ba25a0) (1) Data frame handling\nI0826 16:34:56.002388     849 log.go:172] (0xc000ba25a0) (1) Data frame sent\nI0826 16:34:56.002395     849 log.go:172] (0xc000924a50) (0xc000ba25a0) Stream removed, broadcasting: 1\nI0826 16:34:56.002415     849 log.go:172] (0xc000924a50) Go away received\nI0826 16:34:56.002574     849 log.go:172] (0xc000924a50) (0xc000ba25a0) Stream removed, broadcasting: 1\nI0826 16:34:56.002583     849 log.go:172] (0xc000924a50) (0xc000a56320) Stream removed, broadcasting: 3\nI0826 16:34:56.002588     849 log.go:172] (0xc000924a50) (0xc000ba2640) Stream removed, broadcasting: 5\n"
Aug 26 16:34:56.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:34:56.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:34:56.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:34:56.229: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:34:56.229: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 26 16:34:56.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:34:56.798: INFO: stderr: "I0826 16:34:56.745833     869 log.go:172] (0xc0009f5080) (0xc000520460) Create stream\nI0826 16:34:56.745870     869 log.go:172] (0xc0009f5080) (0xc000520460) Stream added, broadcasting: 1\nI0826 16:34:56.749436     869 log.go:172] (0xc0009f5080) Reply frame received for 1\nI0826 16:34:56.749470     869 log.go:172] (0xc0009f5080) (0xc000520500) Create stream\nI0826 16:34:56.749480     869 log.go:172] (0xc0009f5080) (0xc000520500) Stream added, broadcasting: 3\nI0826 16:34:56.750507     869 log.go:172] (0xc0009f5080) Reply frame received for 3\nI0826 16:34:56.750543     869 log.go:172] (0xc0009f5080) (0xc0007ba000) Create stream\nI0826 16:34:56.750551     869 log.go:172] (0xc0009f5080) (0xc0007ba000) Stream added, broadcasting: 5\nI0826 16:34:56.751044     869 log.go:172] (0xc0009f5080) Reply frame received for 5\nI0826 16:34:56.794893     869 log.go:172] (0xc0009f5080) Data frame received for 5\nI0826 16:34:56.794922     869 log.go:172] (0xc0007ba000) (5) Data frame handling\nI0826 16:34:56.794933     869 log.go:172] (0xc0007ba000) (5) Data frame sent\nI0826 16:34:56.794940     869 log.go:172] (0xc0009f5080) Data frame received for 5\nI0826 16:34:56.794946     869 log.go:172] (0xc0007ba000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:34:56.794960     869 log.go:172] (0xc0009f5080) Data frame received for 3\nI0826 16:34:56.794966     869 log.go:172] (0xc000520500) (3) Data frame handling\nI0826 16:34:56.794972     869 log.go:172] (0xc000520500) (3) Data frame sent\nI0826 16:34:56.794989     869 log.go:172] (0xc0009f5080) Data frame received for 3\nI0826 16:34:56.794996     869 log.go:172] (0xc000520500) (3) Data frame handling\nI0826 16:34:56.795684     869 log.go:172] (0xc0009f5080) Data frame received for 1\nI0826 16:34:56.795695     869 log.go:172] (0xc000520460) (1) Data frame handling\nI0826 16:34:56.795701     869 log.go:172] (0xc000520460) (1) Data frame sent\nI0826 16:34:56.795709     869 log.go:172] (0xc0009f5080) (0xc000520460) Stream removed, broadcasting: 1\nI0826 16:34:56.795717     869 log.go:172] (0xc0009f5080) Go away received\nI0826 16:34:56.796002     869 log.go:172] (0xc0009f5080) (0xc000520460) Stream removed, broadcasting: 1\nI0826 16:34:56.796018     869 log.go:172] (0xc0009f5080) (0xc000520500) Stream removed, broadcasting: 3\nI0826 16:34:56.796024     869 log.go:172] (0xc0009f5080) (0xc0007ba000) Stream removed, broadcasting: 5\n"
Aug 26 16:34:56.799: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:34:56.799: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:34:56.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:34:57.196: INFO: stderr: "I0826 16:34:56.979829     886 log.go:172] (0xc000aa8000) (0xc0005a0000) Create stream\nI0826 16:34:56.979885     886 log.go:172] (0xc000aa8000) (0xc0005a0000) Stream added, broadcasting: 1\nI0826 16:34:56.982244     886 log.go:172] (0xc000aa8000) Reply frame received for 1\nI0826 16:34:56.982284     886 log.go:172] (0xc000aa8000) (0xc0005a00a0) Create stream\nI0826 16:34:56.982299     886 log.go:172] (0xc000aa8000) (0xc0005a00a0) Stream added, broadcasting: 3\nI0826 16:34:56.983195     886 log.go:172] (0xc000aa8000) Reply frame received for 3\nI0826 16:34:56.983241     886 log.go:172] (0xc000aa8000) (0xc0006232c0) Create stream\nI0826 16:34:56.983270     886 log.go:172] (0xc000aa8000) (0xc0006232c0) Stream added, broadcasting: 5\nI0826 16:34:56.984091     886 log.go:172] (0xc000aa8000) Reply frame received for 5\nI0826 16:34:57.056994     886 log.go:172] (0xc000aa8000) Data frame received for 5\nI0826 16:34:57.057014     886 log.go:172] (0xc0006232c0) (5) Data frame handling\nI0826 16:34:57.057036     886 log.go:172] (0xc0006232c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:34:57.180575     886 log.go:172] (0xc000aa8000) Data frame received for 5\nI0826 16:34:57.180605     886 log.go:172] (0xc0006232c0) (5) Data frame handling\nI0826 16:34:57.180625     886 log.go:172] (0xc000aa8000) Data frame received for 3\nI0826 16:34:57.180634     886 log.go:172] (0xc0005a00a0) (3) Data frame handling\nI0826 16:34:57.180642     886 log.go:172] (0xc0005a00a0) (3) Data frame sent\nI0826 16:34:57.180650     886 log.go:172] (0xc000aa8000) Data frame received for 3\nI0826 16:34:57.180656     886 log.go:172] (0xc0005a00a0) (3) Data frame handling\nI0826 16:34:57.185252     886 log.go:172] (0xc000aa8000) Data frame received for 1\nI0826 16:34:57.185272     886 log.go:172] (0xc0005a0000) (1) Data frame handling\nI0826 16:34:57.185285     886 log.go:172] (0xc0005a0000) (1) Data frame sent\nI0826 16:34:57.185299     886 log.go:172] (0xc000aa8000) (0xc0005a0000) Stream removed, broadcasting: 1\nI0826 16:34:57.185319     886 log.go:172] (0xc000aa8000) Go away received\nI0826 16:34:57.185509     886 log.go:172] (0xc000aa8000) (0xc0005a0000) Stream removed, broadcasting: 1\nI0826 16:34:57.185526     886 log.go:172] (0xc000aa8000) (0xc0005a00a0) Stream removed, broadcasting: 3\nI0826 16:34:57.185532     886 log.go:172] (0xc000aa8000) (0xc0006232c0) Stream removed, broadcasting: 5\n"
Aug 26 16:34:57.196: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:34:57.196: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:34:57.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:34:58.404: INFO: stderr: "I0826 16:34:57.603001     906 log.go:172] (0xc0009e40b0) (0xc00091a140) Create stream\nI0826 16:34:57.603080     906 log.go:172] (0xc0009e40b0) (0xc00091a140) Stream added, broadcasting: 1\nI0826 16:34:57.608711     906 log.go:172] (0xc0009e40b0) Reply frame received for 1\nI0826 16:34:57.608849     906 log.go:172] (0xc0009e40b0) (0xc000870000) Create stream\nI0826 16:34:57.608876     906 log.go:172] (0xc0009e40b0) (0xc000870000) Stream added, broadcasting: 3\nI0826 16:34:57.610316     906 log.go:172] (0xc0009e40b0) Reply frame received for 3\nI0826 16:34:57.610450     906 log.go:172] (0xc0009e40b0) (0xc0008ba000) Create stream\nI0826 16:34:57.610519     906 log.go:172] (0xc0009e40b0) (0xc0008ba000) Stream added, broadcasting: 5\nI0826 16:34:57.612016     906 log.go:172] (0xc0009e40b0) Reply frame received for 5\nI0826 16:34:57.667442     906 log.go:172] (0xc0009e40b0) Data frame received for 5\nI0826 16:34:57.667461     906 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0826 16:34:57.667475     906 log.go:172] (0xc0008ba000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:34:58.389693     906 log.go:172] (0xc0009e40b0) Data frame received for 3\nI0826 16:34:58.389712     906 log.go:172] (0xc000870000) (3) Data frame handling\nI0826 16:34:58.389731     906 log.go:172] (0xc000870000) (3) Data frame sent\nI0826 16:34:58.393273     906 log.go:172] (0xc0009e40b0) Data frame received for 5\nI0826 16:34:58.393304     906 log.go:172] (0xc0008ba000) (5) Data frame handling\nI0826 16:34:58.393323     906 log.go:172] (0xc0009e40b0) Data frame received for 3\nI0826 16:34:58.393341     906 log.go:172] (0xc000870000) (3) Data frame handling\nI0826 16:34:58.395197     906 log.go:172] (0xc0009e40b0) Data frame received for 1\nI0826 16:34:58.395228     906 log.go:172] (0xc00091a140) (1) Data frame handling\nI0826 16:34:58.395242     906 log.go:172] (0xc00091a140) (1) Data frame sent\nI0826 16:34:58.395267     906 log.go:172] (0xc0009e40b0) (0xc00091a140) Stream removed, broadcasting: 1\nI0826 16:34:58.395282     906 log.go:172] (0xc0009e40b0) Go away received\nI0826 16:34:58.395568     906 log.go:172] (0xc0009e40b0) (0xc00091a140) Stream removed, broadcasting: 1\nI0826 16:34:58.395582     906 log.go:172] (0xc0009e40b0) (0xc000870000) Stream removed, broadcasting: 3\nI0826 16:34:58.395590     906 log.go:172] (0xc0009e40b0) (0xc0008ba000) Stream removed, broadcasting: 5\n"
Aug 26 16:34:58.404: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:34:58.404: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:34:58.404: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:34:58.664: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 26 16:35:08.845: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:35:08.845: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:35:08.845: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:35:10.027: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:10.027: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:10.027: INFO: ss-1  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:10.027: INFO: ss-2  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:10.027: INFO: 
Aug 26 16:35:10.027: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:11.847: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:11.847: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:11.847: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:11.847: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:11.847: INFO: 
Aug 26 16:35:11.847: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:12.998: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:12.998: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:12.998: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:12.998: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:12.998: INFO: 
Aug 26 16:35:12.998: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:14.212: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:14.212: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:14.212: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:14.212: INFO: ss-2  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:14.212: INFO: 
Aug 26 16:35:14.212: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:15.332: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:15.332: INFO: ss-0  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:15.332: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:15.332: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:15.332: INFO: 
Aug 26 16:35:15.332: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:16.390: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 26 16:35:16.390: INFO: ss-0  kali-worker2  Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:22 +0000 UTC  }]
Aug 26 16:35:16.390: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:16.390: INFO: ss-2  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:16.390: INFO: 
Aug 26 16:35:16.390: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 26 16:35:17.394: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Aug 26 16:35:17.394: INFO: ss-1  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:17.394: INFO: 
Aug 26 16:35:17.394: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 26 16:35:18.762: INFO: POD   NODE         PHASE    GRACE  CONDITIONS
Aug 26 16:35:18.762: INFO: ss-1  kali-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-26 16:34:43 +0000 UTC  }]
Aug 26 16:35:18.762: INFO: 
Aug 26 16:35:18.762: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9528
Aug 26 16:35:19.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:35:20.172: INFO: rc: 1
Aug 26 16:35:20.172: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
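The framework re-runs the failed exec every 10 seconds until it succeeds or an overall wait expires; the retry pattern is roughly the following sketch. Only the 10s interval, namespace, pod name and command come from the log; the kubectlExec helper and the 5-minute cap are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// kubectlExec is a hypothetical helper wrapping the kubectl exec call seen in the log
// (the --server and --kubeconfig flags are omitted here).
func kubectlExec(ns, pod, cmd string) error {
	return exec.Command("kubectl", "exec", "--namespace="+ns, pod, "--",
		"/bin/sh", "-x", "-c", cmd).Run()
}

func main() {
	cmd := "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"

	// Retry every 10s (the interval logged above); give up after 5 minutes (assumed cap).
	err := wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		if execErr := kubectlExec("statefulset-9528", "ss-1", cmd); execErr != nil {
			fmt.Printf("retrying after error: %v\n", execErr)
			return false, nil // not done yet; keep polling
		}
		return true, nil // command succeeded
	})
	if err != nil {
		fmt.Println("gave up waiting:", err)
	}
}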
Aug 26 16:35:30.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:35:30.371: INFO: rc: 1
Aug 26 16:35:30.371: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:35:40.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:35:40.627: INFO: rc: 1
Aug 26 16:35:40.627: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:35:50.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:35:50.925: INFO: rc: 1
Aug 26 16:35:50.925: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:00.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:01.033: INFO: rc: 1
Aug 26 16:36:01.033: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:11.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:11.138: INFO: rc: 1
Aug 26 16:36:11.138: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:21.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:21.496: INFO: rc: 1
Aug 26 16:36:21.496: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:31.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:31.709: INFO: rc: 1
Aug 26 16:36:31.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:41.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:41.883: INFO: rc: 1
Aug 26 16:36:41.883: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:36:51.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:36:51.979: INFO: rc: 1
Aug 26 16:36:51.979: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:01.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:37:02.091: INFO: rc: 1
Aug 26 16:37:02.091: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:12.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:37:12.714: INFO: rc: 1
Aug 26 16:37:12.714: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:22.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:37:22.822: INFO: rc: 1
Aug 26 16:37:22.822: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:32.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:37:33.177: INFO: rc: 1
Aug 26 16:37:33.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:43.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:37:43.329: INFO: rc: 1
Aug 26 16:37:43.329: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Aug 26 16:37:53.330: INFO: Retrying the same RunHostCmd every 10s through 16:40:26.198; every attempt returned rc: 1 with the same stderr: Error from server (NotFound): pods "ss-1" not found (exit status 1)
Aug 26 16:40:26.198: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
Aug 26 16:40:26.198: INFO: Scaling statefulset ss to 0
Aug 26 16:40:26.203: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 26 16:40:26.205: INFO: Deleting all statefulset in ns statefulset-9528
Aug 26 16:40:26.206: INFO: Scaling statefulset ss to 0
Aug 26 16:40:26.212: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:40:26.214: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:40:26.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9528" for this suite.

• [SLOW TEST:363.479 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":54,"skipped":857,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:40:26.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 26 16:40:36.171: INFO: &Pod{ObjectMeta:{send-events-984eb775-ae80-4dff-af1c-4f46864898d3  events-2952 /api/v1/namespaces/events-2952/pods/send-events-984eb775-ae80-4dff-af1c-4f46864898d3 adb0df89-9bb6-4219-97d5-dcb6ef89e406 1096658 0 2020-08-26 16:40:26 +0000 UTC   map[name:foo time:925118192] map[] [] []  [{e2e.test Update v1 2020-08-26 16:40:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 16:40:33 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 56 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q9tfx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q9tfx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q9tfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:40:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:40:33 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:40:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:40:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.182,StartTime:2020-08-26 16:40:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 16:40:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://623d8bb132810d2cd9a7cd57cc35ff9ad1dc6d4425c5d3571a8c1a36da6579a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Aug 26 16:40:38.295: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 26 16:40:40.464: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:40:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2952" for this suite.

• [SLOW TEST:14.709 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":55,"skipped":858,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:40:40.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 16:40:42.009: INFO: Waiting up to 5m0s for pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96" in namespace "emptydir-6299" to be "Succeeded or Failed"
Aug 26 16:40:42.194: INFO: Pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96": Phase="Pending", Reason="", readiness=false. Elapsed: 185.026544ms
Aug 26 16:40:44.277: INFO: Pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267970015s
Aug 26 16:40:46.301: INFO: Pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291647432s
Aug 26 16:40:48.304: INFO: Pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.294427837s
STEP: Saw pod success
Aug 26 16:40:48.304: INFO: Pod "pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96" satisfied condition "Succeeded or Failed"
Aug 26 16:40:48.306: INFO: Trying to get logs from node kali-worker2 pod pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96 container test-container: 
STEP: delete the pod
Aug 26 16:40:48.355: INFO: Waiting for pod pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96 to disappear
Aug 26 16:40:48.378: INFO: Pod pod-b0fc8fa7-e949-4af3-9afd-5fd8341eeb96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:40:48.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6299" for this suite.

• [SLOW TEST:7.430 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:40:48.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:40:48.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596" in namespace "projected-843" to be "Succeeded or Failed"
Aug 26 16:40:48.477: INFO: Pod "downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596": Phase="Pending", Reason="", readiness=false. Elapsed: 33.447709ms
Aug 26 16:40:50.481: INFO: Pod "downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037301155s
Aug 26 16:40:52.484: INFO: Pod "downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040730791s
STEP: Saw pod success
Aug 26 16:40:52.484: INFO: Pod "downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596" satisfied condition "Succeeded or Failed"
Aug 26 16:40:52.487: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596 container client-container: 
STEP: delete the pod
Aug 26 16:40:52.585: INFO: Waiting for pod downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596 to disappear
Aug 26 16:40:52.587: INFO: Pod downwardapi-volume-4d33fe83-5bda-4930-b8e1-7f4fc143e596 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:40:52.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-843" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":883,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:40:52.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Aug 26 16:40:52.920: INFO: Created pod dns-4599 in namespace dns-4599 (uid 56522adf-a775-4351-b6c2-cb6f2eb08e5e): single container "agnhost", image us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, args [pause], serviceaccount default, restartPolicy Always; dnsPolicy=None with dnsConfig{nameservers: [1.1.1.1], searches: [resolv.conf.local]}; default not-ready/unreachable tolerations (300s). Status: Phase=Pending (not yet scheduled).
Aug 26 16:40:52.922: INFO: The status of Pod dns-4599 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:40:54.926: INFO: The status of Pod dns-4599 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:40:56.926: INFO: The status of Pod dns-4599 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:40:58.926: INFO: The status of Pod dns-4599 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Aug 26 16:40:58.926: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4599 PodName:dns-4599 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:40:58.926: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:40:58.956903       7 log.go:172] (0xc00155a4d0) (0xc0021cfe00) Create stream
I0826 16:40:58.956932       7 log.go:172] (0xc00155a4d0) (0xc0021cfe00) Stream added, broadcasting: 1
I0826 16:40:58.958644       7 log.go:172] (0xc00155a4d0) Reply frame received for 1
I0826 16:40:58.958675       7 log.go:172] (0xc00155a4d0) (0xc0030c99a0) Create stream
I0826 16:40:58.958687       7 log.go:172] (0xc00155a4d0) (0xc0030c99a0) Stream added, broadcasting: 3
I0826 16:40:58.959350       7 log.go:172] (0xc00155a4d0) Reply frame received for 3
I0826 16:40:58.959380       7 log.go:172] (0xc00155a4d0) (0xc002fe0000) Create stream
I0826 16:40:58.959391       7 log.go:172] (0xc00155a4d0) (0xc002fe0000) Stream added, broadcasting: 5
I0826 16:40:58.960036       7 log.go:172] (0xc00155a4d0) Reply frame received for 5
I0826 16:40:59.042010       7 log.go:172] (0xc00155a4d0) Data frame received for 3
I0826 16:40:59.042035       7 log.go:172] (0xc0030c99a0) (3) Data frame handling
I0826 16:40:59.042052       7 log.go:172] (0xc0030c99a0) (3) Data frame sent
I0826 16:40:59.044413       7 log.go:172] (0xc00155a4d0) Data frame received for 3
I0826 16:40:59.044427       7 log.go:172] (0xc0030c99a0) (3) Data frame handling
I0826 16:40:59.044599       7 log.go:172] (0xc00155a4d0) Data frame received for 5
I0826 16:40:59.044612       7 log.go:172] (0xc002fe0000) (5) Data frame handling
I0826 16:40:59.046212       7 log.go:172] (0xc00155a4d0) Data frame received for 1
I0826 16:40:59.046235       7 log.go:172] (0xc0021cfe00) (1) Data frame handling
I0826 16:40:59.046246       7 log.go:172] (0xc0021cfe00) (1) Data frame sent
I0826 16:40:59.046258       7 log.go:172] (0xc00155a4d0) (0xc0021cfe00) Stream removed, broadcasting: 1
I0826 16:40:59.046271       7 log.go:172] (0xc00155a4d0) Go away received
I0826 16:40:59.046566       7 log.go:172] (0xc00155a4d0) (0xc0021cfe00) Stream removed, broadcasting: 1
I0826 16:40:59.046582       7 log.go:172] (0xc00155a4d0) (0xc0030c99a0) Stream removed, broadcasting: 3
I0826 16:40:59.046594       7 log.go:172] (0xc00155a4d0) (0xc002fe0000) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Aug 26 16:40:59.046: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4599 PodName:dns-4599 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:40:59.046: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:40:59.074277       7 log.go:172] (0xc001ca24d0) (0xc001b24460) Create stream
I0826 16:40:59.074298       7 log.go:172] (0xc001ca24d0) (0xc001b24460) Stream added, broadcasting: 1
I0826 16:40:59.075886       7 log.go:172] (0xc001ca24d0) Reply frame received for 1
I0826 16:40:59.075926       7 log.go:172] (0xc001ca24d0) (0xc0030c9ae0) Create stream
I0826 16:40:59.075944       7 log.go:172] (0xc001ca24d0) (0xc0030c9ae0) Stream added, broadcasting: 3
I0826 16:40:59.076552       7 log.go:172] (0xc001ca24d0) Reply frame received for 3
I0826 16:40:59.076577       7 log.go:172] (0xc001ca24d0) (0xc0030c9b80) Create stream
I0826 16:40:59.076588       7 log.go:172] (0xc001ca24d0) (0xc0030c9b80) Stream added, broadcasting: 5
I0826 16:40:59.077319       7 log.go:172] (0xc001ca24d0) Reply frame received for 5
I0826 16:40:59.152277       7 log.go:172] (0xc001ca24d0) Data frame received for 3
I0826 16:40:59.152344       7 log.go:172] (0xc0030c9ae0) (3) Data frame handling
I0826 16:40:59.152372       7 log.go:172] (0xc0030c9ae0) (3) Data frame sent
I0826 16:40:59.154359       7 log.go:172] (0xc001ca24d0) Data frame received for 3
I0826 16:40:59.154382       7 log.go:172] (0xc0030c9ae0) (3) Data frame handling
I0826 16:40:59.154590       7 log.go:172] (0xc001ca24d0) Data frame received for 5
I0826 16:40:59.154623       7 log.go:172] (0xc0030c9b80) (5) Data frame handling
I0826 16:40:59.155556       7 log.go:172] (0xc001ca24d0) Data frame received for 1
I0826 16:40:59.155573       7 log.go:172] (0xc001b24460) (1) Data frame handling
I0826 16:40:59.155584       7 log.go:172] (0xc001b24460) (1) Data frame sent
I0826 16:40:59.155592       7 log.go:172] (0xc001ca24d0) (0xc001b24460) Stream removed, broadcasting: 1
I0826 16:40:59.155655       7 log.go:172] (0xc001ca24d0) (0xc001b24460) Stream removed, broadcasting: 1
I0826 16:40:59.155666       7 log.go:172] (0xc001ca24d0) (0xc0030c9ae0) Stream removed, broadcasting: 3
I0826 16:40:59.155877       7 log.go:172] (0xc001ca24d0) Go away received
I0826 16:40:59.155956       7 log.go:172] (0xc001ca24d0) (0xc0030c9b80) Stream removed, broadcasting: 5
Aug 26 16:40:59.156: INFO: Deleting pod dns-4599...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:40:59.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4599" for this suite.

• [SLOW TEST:6.676 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":58,"skipped":892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:40:59.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:40:59.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754" in namespace "downward-api-6158" to be "Succeeded or Failed"
Aug 26 16:41:00.002: INFO: Pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754": Phase="Pending", Reason="", readiness=false. Elapsed: 28.283986ms
Aug 26 16:41:02.039: INFO: Pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065446708s
Aug 26 16:41:04.043: INFO: Pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754": Phase="Running", Reason="", readiness=true. Elapsed: 4.069057193s
Aug 26 16:41:06.046: INFO: Pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072753059s
STEP: Saw pod success
Aug 26 16:41:06.046: INFO: Pod "downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754" satisfied condition "Succeeded or Failed"
Aug 26 16:41:06.050: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754 container client-container: 
STEP: delete the pod
Aug 26 16:41:06.087: INFO: Waiting for pod downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754 to disappear
Aug 26 16:41:06.123: INFO: Pod downwardapi-volume-13847e83-14fa-42f8-9166-201f1bdf6754 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:06.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6158" for this suite.

• [SLOW TEST:6.858 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:06.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:41:06.220: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 26 16:41:08.396: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:09.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7665" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":60,"skipped":938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:09.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:41:10.041: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:11.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5867" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":61,"skipped":976,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:11.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:41:12.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22" in namespace "projected-6498" to be "Succeeded or Failed"
Aug 26 16:41:12.057: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22": Phase="Pending", Reason="", readiness=false. Elapsed: 15.967703ms
Aug 26 16:41:14.362: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321137054s
Aug 26 16:41:16.907: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.866447823s
Aug 26 16:41:18.922: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22": Phase="Running", Reason="", readiness=true. Elapsed: 6.880970249s
Aug 26 16:41:20.925: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.884011311s
STEP: Saw pod success
Aug 26 16:41:20.925: INFO: Pod "downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22" satisfied condition "Succeeded or Failed"
Aug 26 16:41:20.927: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22 container client-container: 
STEP: delete the pod
Aug 26 16:41:20.974: INFO: Waiting for pod downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22 to disappear
Aug 26 16:41:21.086: INFO: Pod downwardapi-volume-9f3e98c2-0d51-4536-adee-a9cb9958ce22 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:21.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6498" for this suite.

• [SLOW TEST:9.588 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1030,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:21.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 26 16:41:21.395: INFO: Waiting up to 5m0s for pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b" in namespace "emptydir-9348" to be "Succeeded or Failed"
Aug 26 16:41:21.479: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b": Phase="Pending", Reason="", readiness=false. Elapsed: 83.56996ms
Aug 26 16:41:23.571: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175995015s
Aug 26 16:41:25.710: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314564207s
Aug 26 16:41:28.081: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685649643s
Aug 26 16:41:30.188: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.792820245s
STEP: Saw pod success
Aug 26 16:41:30.188: INFO: Pod "pod-37e22eec-6dbe-4587-8f07-9b16afade40b" satisfied condition "Succeeded or Failed"
Aug 26 16:41:30.388: INFO: Trying to get logs from node kali-worker2 pod pod-37e22eec-6dbe-4587-8f07-9b16afade40b container test-container: 
STEP: delete the pod
Aug 26 16:41:30.762: INFO: Waiting for pod pod-37e22eec-6dbe-4587-8f07-9b16afade40b to disappear
Aug 26 16:41:30.836: INFO: Pod pod-37e22eec-6dbe-4587-8f07-9b16afade40b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:30.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9348" for this suite.

• [SLOW TEST:9.712 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1034,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:30.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:41:33.711: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:41:35.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056893, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056893, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056893, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734056893, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:41:38.304, 16:41:40.192, 16:41:41.733, 16:41:43.800: INFO: deployment status unchanged: Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-779fdc84d9" is progressing).
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:41:47.213: INFO: Waiting for the number of endpoints for service e2e-test-webhook to be 1
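The two waits above (webhook Deployment becoming Available, Service gaining an endpoint) can be reproduced by hand. A minimal sketch, assuming the deployment name sample-webhook-deployment and service name e2e-test-webhook from the log lines above and the namespace webhook-9891 that appears in the teardown lines below; the 5m timeout is arbitrary and not part of the test:

# Wait for the webhook backend Deployment to report Available.
kubectl rollout status deployment/sample-webhook-deployment -n webhook-9891 --timeout=5m
# Confirm the webhook Service has a ready endpoint behind it.
kubectl get endpoints e2e-test-webhook -n webhook-9891 -o wide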
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:41:47.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:41:48.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9891" for this suite.
STEP: Destroying namespace "webhook-9891-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.685 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":64,"skipped":1041,"failed":0}
SSSSSSSSS
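The test above registers its validating webhook programmatically through the AdmissionRegistration API; a roughly equivalent declarative registration is sketched below for reference. Only the service name (e2e-test-webhook) and namespace (webhook-9891) come from this log; the configuration name, webhook name, custom resource group/resource names, and serving path are hypothetical placeholders, and the serving-certificate CA bundle is omitted.

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-ops            # hypothetical name
webhooks:
  - name: deny-crd.example.com              # hypothetical webhook name
    rules:
      - apiGroups: ["stable.example.com"]   # hypothetical CRD group
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["e2e-test-crds"]        # hypothetical CRD plural
    clientConfig:
      service:
        name: e2e-test-webhook              # service name from the log
        namespace: webhook-9891             # namespace from the log
        path: /custom-resource              # hypothetical serving path
      # caBundle for the webhook's serving certificate is omitted in this sketch
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    failurePolicy: Fail
EOF

With a configuration like this in place, create, update and delete requests for the matching custom resource are sent to the webhook backend, which can deny them exactly as the STEP lines above show.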
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:41:48.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 16:41:48.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4562'
Aug 26 16:41:48.669: INFO: stderr: ""
Aug 26 16:41:48.669: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 26 16:41:53.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4562 -o json'
Aug 26 16:41:53.827: INFO: stderr: ""
Aug 26 16:41:53.827: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-26T16:41:48Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-26T16:41:48Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.248\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                       
     }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-26T16:41:53Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-4562\",\n        \"resourceVersion\": \"1097204\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4562/pods/e2e-test-httpd-pod\",\n        \"uid\": \"a636de92-6946-41ec-b2c5-4e8977c58c7c\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-5jj44\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-5jj44\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-5jj44\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T16:41:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T16:41:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T16:41:52Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-26T16:41:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://e6d333f8d4c53d0a8a3c6568fff083f102a6f5bbb42cb13622b6ae884b7432d1\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-26T16:41:52Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.13\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.248\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.248\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-26T16:41:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 26 16:41:53.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4562'
Aug 26 16:41:54.253: INFO: stderr: ""
Aug 26 16:41:54.253: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 26 16:41:54.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4562'
Aug 26 16:42:07.833: INFO: stderr: ""
Aug 26 16:42:07.833: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:42:07.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4562" for this suite.

• [SLOW TEST:19.312 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":65,"skipped":1050,"failed":0}
SSSSSSSSSSSSSSSSSSSS
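The Kubectl replace flow above can be reproduced with plain kubectl. The sketch below reuses the pod, namespace, and image names from the log; the sed edit stands in for the JSON manipulation the e2e code performs in Go before piping the manifest back:

# Start the single-container pod, as in the log above.
kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine \
    --labels=run=e2e-test-httpd-pod -n kubectl-4562
# Fetch the live manifest, swap the container image, and replace the object in place.
kubectl get pod e2e-test-httpd-pod -n kubectl-4562 -o json \
    | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
    | kubectl replace -f - -n kubectl-4562
# Verify the image field was updated (the test only checks .spec, not that busybox keeps serving).
kubectl get pod e2e-test-httpd-pod -n kubectl-4562 -o jsonpath='{.spec.containers[0].image}'
# Clean up.
kubectl delete pod e2e-test-httpd-pod -n kubectl-4562

A container image is one of the few pod-spec fields that may be changed on a live Pod, which is why kubectl replace succeeds here without force-recreating the object.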
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:42:07.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:42:08.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5122" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":66,"skipped":1070,"failed":0}
SSSSSSSSSSSSSSSSSS
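The ResourceQuota steps above map directly onto kubectl. In the sketch below only the namespace comes from the log; the quota name and hard limits are illustrative, since the log does not show the values the test uses:

# Create and read back a quota.
kubectl create quota test-quota --hard=pods=5,services=2 -n resourcequota-5122
kubectl get resourcequota test-quota -n resourcequota-5122 -o yaml
# Update ("modify") it by patching the hard limits.
kubectl patch resourcequota test-quota -n resourcequota-5122 \
    --type=merge -p '{"spec":{"hard":{"pods":"10"}}}'
# Delete it and verify it is gone (the final get should return NotFound).
kubectl delete resourcequota test-quota -n resourcequota-5122
kubectl get resourcequota test-quota -n resourcequota-5122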
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:42:08.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3153
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3153
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3153
Aug 26 16:42:08.145: INFO: Found 0 stateful pods, waiting for 1
Aug 26 16:42:18.149: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 26 16:42:18.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:42:18.623: INFO: stderr: "I0826 16:42:18.422807    1607 log.go:172] (0xc0000ce420) (0xc0009e2d20) Create stream\nI0826 16:42:18.422858    1607 log.go:172] (0xc0000ce420) (0xc0009e2d20) Stream added, broadcasting: 1\nI0826 16:42:18.424313    1607 log.go:172] (0xc0000ce420) Reply frame received for 1\nI0826 16:42:18.424348    1607 log.go:172] (0xc0000ce420) (0xc000ab80a0) Create stream\nI0826 16:42:18.424357    1607 log.go:172] (0xc0000ce420) (0xc000ab80a0) Stream added, broadcasting: 3\nI0826 16:42:18.425144    1607 log.go:172] (0xc0000ce420) Reply frame received for 3\nI0826 16:42:18.425175    1607 log.go:172] (0xc0000ce420) (0xc0009e2dc0) Create stream\nI0826 16:42:18.425188    1607 log.go:172] (0xc0000ce420) (0xc0009e2dc0) Stream added, broadcasting: 5\nI0826 16:42:18.425853    1607 log.go:172] (0xc0000ce420) Reply frame received for 5\nI0826 16:42:18.497345    1607 log.go:172] (0xc0000ce420) Data frame received for 5\nI0826 16:42:18.497362    1607 log.go:172] (0xc0009e2dc0) (5) Data frame handling\nI0826 16:42:18.497373    1607 log.go:172] (0xc0009e2dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:42:18.611450    1607 log.go:172] (0xc0000ce420) Data frame received for 3\nI0826 16:42:18.611480    1607 log.go:172] (0xc000ab80a0) (3) Data frame handling\nI0826 16:42:18.611509    1607 log.go:172] (0xc000ab80a0) (3) Data frame sent\nI0826 16:42:18.611598    1607 log.go:172] (0xc0000ce420) Data frame received for 3\nI0826 16:42:18.611616    1607 log.go:172] (0xc000ab80a0) (3) Data frame handling\nI0826 16:42:18.611921    1607 log.go:172] (0xc0000ce420) Data frame received for 5\nI0826 16:42:18.611935    1607 log.go:172] (0xc0009e2dc0) (5) Data frame handling\nI0826 16:42:18.613513    1607 log.go:172] (0xc0000ce420) Data frame received for 1\nI0826 16:42:18.613539    1607 log.go:172] (0xc0009e2d20) (1) Data frame handling\nI0826 16:42:18.613555    1607 log.go:172] (0xc0009e2d20) (1) Data frame sent\nI0826 16:42:18.613591    1607 log.go:172] (0xc0000ce420) (0xc0009e2d20) Stream removed, broadcasting: 1\nI0826 16:42:18.613648    1607 log.go:172] (0xc0000ce420) Go away received\nI0826 16:42:18.613995    1607 log.go:172] (0xc0000ce420) (0xc0009e2d20) Stream removed, broadcasting: 1\nI0826 16:42:18.614009    1607 log.go:172] (0xc0000ce420) (0xc000ab80a0) Stream removed, broadcasting: 3\nI0826 16:42:18.614017    1607 log.go:172] (0xc0000ce420) (0xc0009e2dc0) Stream removed, broadcasting: 5\n"
Aug 26 16:42:18.623: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:42:18.623: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:42:18.626: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 26 16:42:28.630: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:42:28.630: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:42:28.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999503s
Aug 26 16:42:29.805: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987141812s
Aug 26 16:42:30.809: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964013358s
Aug 26 16:42:32.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.959635509s
Aug 26 16:42:33.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.390942866s
Aug 26 16:42:34.386: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.386780213s
Aug 26 16:42:36.242: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.382830734s
Aug 26 16:42:37.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.527197649s
Aug 26 16:42:38.403: INFO: Verifying statefulset ss doesn't scale past 1 for another 514.382797ms
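The halt demonstrated above relies on making ss-0 fail its readiness check: the webserver pod serves /usr/local/apache2/htdocs/index.html, so moving that file away flips the pod to Ready=false and the StatefulSet controller refuses to create ss-1. A manual sketch of the same sequence; the namespace, pod name, labels, and mv command come from the log, while the readiness-probe detail is inferred from the pod going NotReady right after the move:

# Break ss-0's readiness the same way the test does.
kubectl exec -n statefulset-3153 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# Ask for 3 replicas; with ss-0 NotReady, no new ordinals are created.
kubectl scale statefulset ss -n statefulset-3153 --replicas=3
kubectl get pods -n statefulset-3153 -l baz=blah,foo=bar -w
# Restore the file; ss-0 becomes Ready again and ss-1, ss-2 are created in order.
kubectl exec -n statefulset-3153 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'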
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3153
Aug 26 16:42:39.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:42:40.399: INFO: stderr: "I0826 16:42:40.319062    1626 log.go:172] (0xc0000e2630) (0xc000506a00) Create stream\nI0826 16:42:40.319110    1626 log.go:172] (0xc0000e2630) (0xc000506a00) Stream added, broadcasting: 1\nI0826 16:42:40.321062    1626 log.go:172] (0xc0000e2630) Reply frame received for 1\nI0826 16:42:40.321095    1626 log.go:172] (0xc0000e2630) (0xc000970000) Create stream\nI0826 16:42:40.321106    1626 log.go:172] (0xc0000e2630) (0xc000970000) Stream added, broadcasting: 3\nI0826 16:42:40.321871    1626 log.go:172] (0xc0000e2630) Reply frame received for 3\nI0826 16:42:40.321907    1626 log.go:172] (0xc0000e2630) (0xc000b7a000) Create stream\nI0826 16:42:40.321919    1626 log.go:172] (0xc0000e2630) (0xc000b7a000) Stream added, broadcasting: 5\nI0826 16:42:40.322661    1626 log.go:172] (0xc0000e2630) Reply frame received for 5\nI0826 16:42:40.389210    1626 log.go:172] (0xc0000e2630) Data frame received for 5\nI0826 16:42:40.389238    1626 log.go:172] (0xc000b7a000) (5) Data frame handling\nI0826 16:42:40.389248    1626 log.go:172] (0xc000b7a000) (5) Data frame sent\nI0826 16:42:40.389254    1626 log.go:172] (0xc0000e2630) Data frame received for 5\nI0826 16:42:40.389260    1626 log.go:172] (0xc000b7a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 16:42:40.389283    1626 log.go:172] (0xc0000e2630) Data frame received for 3\nI0826 16:42:40.389291    1626 log.go:172] (0xc000970000) (3) Data frame handling\nI0826 16:42:40.389299    1626 log.go:172] (0xc000970000) (3) Data frame sent\nI0826 16:42:40.389310    1626 log.go:172] (0xc0000e2630) Data frame received for 3\nI0826 16:42:40.389315    1626 log.go:172] (0xc000970000) (3) Data frame handling\nI0826 16:42:40.390252    1626 log.go:172] (0xc0000e2630) Data frame received for 1\nI0826 16:42:40.390269    1626 log.go:172] (0xc000506a00) (1) Data frame handling\nI0826 16:42:40.390282    1626 log.go:172] (0xc000506a00) (1) Data frame sent\nI0826 16:42:40.390291    1626 log.go:172] (0xc0000e2630) (0xc000506a00) Stream removed, broadcasting: 1\nI0826 16:42:40.390300    1626 log.go:172] (0xc0000e2630) Go away received\nI0826 16:42:40.390599    1626 log.go:172] (0xc0000e2630) (0xc000506a00) Stream removed, broadcasting: 1\nI0826 16:42:40.390614    1626 log.go:172] (0xc0000e2630) (0xc000970000) Stream removed, broadcasting: 3\nI0826 16:42:40.390622    1626 log.go:172] (0xc0000e2630) (0xc000b7a000) Stream removed, broadcasting: 5\n"
Aug 26 16:42:40.399: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:42:40.399: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:42:40.578: INFO: Found 1 stateful pods, waiting for 3
Aug 26 16:42:50.583: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:42:50.583: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:42:50.583: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 16:43:00.582: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:43:00.582: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 16:43:00.582: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
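Ordered creation (ss-0, then ss-1, then ss-2, each waiting for its predecessor to be Running and Ready) is the default OrderedReady pod management policy. For reference, a StatefulSet of the same general shape is sketched below: the namespace, service name, labels, and container name come from the log, while the image, the readiness probe, and the rest of the manifest are assumptions (the test builds its exact spec in Go, which this log does not show):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: statefulset-3153
spec:
  clusterIP: None                    # headless service backing the StatefulSet
  selector:
    baz: blah
    foo: bar
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-3153
spec:
  serviceName: test
  podManagementPolicy: OrderedReady  # default; pods are created/removed one ordinal at a time
  replicas: 3
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
        - name: webserver
          image: docker.io/library/httpd:2.4.38-alpine
          readinessProbe:            # an unready pod is what halts further scaling
            httpGet:
              path: /index.html
              port: 80
EOF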
STEP: Scale down will halt with unhealthy stateful pod
Aug 26 16:43:00.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:43:00.821: INFO: stderr: "I0826 16:43:00.727830    1647 log.go:172] (0xc0000e51e0) (0xc0009ae460) Create stream\nI0826 16:43:00.727897    1647 log.go:172] (0xc0000e51e0) (0xc0009ae460) Stream added, broadcasting: 1\nI0826 16:43:00.730577    1647 log.go:172] (0xc0000e51e0) Reply frame received for 1\nI0826 16:43:00.730658    1647 log.go:172] (0xc0000e51e0) (0xc0001faa00) Create stream\nI0826 16:43:00.730681    1647 log.go:172] (0xc0000e51e0) (0xc0001faa00) Stream added, broadcasting: 3\nI0826 16:43:00.731632    1647 log.go:172] (0xc0000e51e0) Reply frame received for 3\nI0826 16:43:00.731685    1647 log.go:172] (0xc0000e51e0) (0xc0005d55e0) Create stream\nI0826 16:43:00.731704    1647 log.go:172] (0xc0000e51e0) (0xc0005d55e0) Stream added, broadcasting: 5\nI0826 16:43:00.732808    1647 log.go:172] (0xc0000e51e0) Reply frame received for 5\nI0826 16:43:00.812534    1647 log.go:172] (0xc0000e51e0) Data frame received for 5\nI0826 16:43:00.812587    1647 log.go:172] (0xc0005d55e0) (5) Data frame handling\nI0826 16:43:00.812604    1647 log.go:172] (0xc0005d55e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:43:00.812625    1647 log.go:172] (0xc0000e51e0) Data frame received for 3\nI0826 16:43:00.812634    1647 log.go:172] (0xc0001faa00) (3) Data frame handling\nI0826 16:43:00.812645    1647 log.go:172] (0xc0001faa00) (3) Data frame sent\nI0826 16:43:00.812654    1647 log.go:172] (0xc0000e51e0) Data frame received for 3\nI0826 16:43:00.812662    1647 log.go:172] (0xc0001faa00) (3) Data frame handling\nI0826 16:43:00.812710    1647 log.go:172] (0xc0000e51e0) Data frame received for 5\nI0826 16:43:00.812890    1647 log.go:172] (0xc0005d55e0) (5) Data frame handling\nI0826 16:43:00.814670    1647 log.go:172] (0xc0000e51e0) Data frame received for 1\nI0826 16:43:00.814686    1647 log.go:172] (0xc0009ae460) (1) Data frame handling\nI0826 16:43:00.814695    1647 log.go:172] (0xc0009ae460) (1) Data frame sent\nI0826 16:43:00.814706    1647 log.go:172] (0xc0000e51e0) (0xc0009ae460) Stream removed, broadcasting: 1\nI0826 16:43:00.814717    1647 log.go:172] (0xc0000e51e0) Go away received\nI0826 16:43:00.815203    1647 log.go:172] (0xc0000e51e0) (0xc0009ae460) Stream removed, broadcasting: 1\nI0826 16:43:00.815233    1647 log.go:172] (0xc0000e51e0) (0xc0001faa00) Stream removed, broadcasting: 3\nI0826 16:43:00.815244    1647 log.go:172] (0xc0000e51e0) (0xc0005d55e0) Stream removed, broadcasting: 5\n"
Aug 26 16:43:00.822: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:43:00.822: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:43:00.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:43:01.102: INFO: stderr: "I0826 16:43:00.954724    1667 log.go:172] (0xc000a75810) (0xc0008deaa0) Create stream\nI0826 16:43:00.954782    1667 log.go:172] (0xc000a75810) (0xc0008deaa0) Stream added, broadcasting: 1\nI0826 16:43:00.959992    1667 log.go:172] (0xc000a75810) Reply frame received for 1\nI0826 16:43:00.960063    1667 log.go:172] (0xc000a75810) (0xc000609540) Create stream\nI0826 16:43:00.960084    1667 log.go:172] (0xc000a75810) (0xc000609540) Stream added, broadcasting: 3\nI0826 16:43:00.961094    1667 log.go:172] (0xc000a75810) Reply frame received for 3\nI0826 16:43:00.961143    1667 log.go:172] (0xc000a75810) (0xc000448960) Create stream\nI0826 16:43:00.961152    1667 log.go:172] (0xc000a75810) (0xc000448960) Stream added, broadcasting: 5\nI0826 16:43:00.962048    1667 log.go:172] (0xc000a75810) Reply frame received for 5\nI0826 16:43:01.042636    1667 log.go:172] (0xc000a75810) Data frame received for 5\nI0826 16:43:01.042662    1667 log.go:172] (0xc000448960) (5) Data frame handling\nI0826 16:43:01.042678    1667 log.go:172] (0xc000448960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:43:01.087513    1667 log.go:172] (0xc000a75810) Data frame received for 3\nI0826 16:43:01.087542    1667 log.go:172] (0xc000609540) (3) Data frame handling\nI0826 16:43:01.087584    1667 log.go:172] (0xc000609540) (3) Data frame sent\nI0826 16:43:01.087708    1667 log.go:172] (0xc000a75810) Data frame received for 5\nI0826 16:43:01.087733    1667 log.go:172] (0xc000448960) (5) Data frame handling\nI0826 16:43:01.087805    1667 log.go:172] (0xc000a75810) Data frame received for 3\nI0826 16:43:01.087818    1667 log.go:172] (0xc000609540) (3) Data frame handling\nI0826 16:43:01.089870    1667 log.go:172] (0xc000a75810) Data frame received for 1\nI0826 16:43:01.089941    1667 log.go:172] (0xc0008deaa0) (1) Data frame handling\nI0826 16:43:01.090017    1667 log.go:172] (0xc0008deaa0) (1) Data frame sent\nI0826 16:43:01.090039    1667 log.go:172] (0xc000a75810) (0xc0008deaa0) Stream removed, broadcasting: 1\nI0826 16:43:01.090228    1667 log.go:172] (0xc000a75810) Go away received\nI0826 16:43:01.090347    1667 log.go:172] (0xc000a75810) (0xc0008deaa0) Stream removed, broadcasting: 1\nI0826 16:43:01.090360    1667 log.go:172] (0xc000a75810) (0xc000609540) Stream removed, broadcasting: 3\nI0826 16:43:01.090365    1667 log.go:172] (0xc000a75810) (0xc000448960) Stream removed, broadcasting: 5\n"
Aug 26 16:43:01.102: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:43:01.102: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:43:01.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 16:43:01.352: INFO: stderr: "I0826 16:43:01.246315    1687 log.go:172] (0xc00059ebb0) (0xc0009d6320) Create stream\nI0826 16:43:01.246372    1687 log.go:172] (0xc00059ebb0) (0xc0009d6320) Stream added, broadcasting: 1\nI0826 16:43:01.253083    1687 log.go:172] (0xc00059ebb0) Reply frame received for 1\nI0826 16:43:01.253139    1687 log.go:172] (0xc00059ebb0) (0xc0007bb4a0) Create stream\nI0826 16:43:01.253161    1687 log.go:172] (0xc00059ebb0) (0xc0007bb4a0) Stream added, broadcasting: 3\nI0826 16:43:01.254054    1687 log.go:172] (0xc00059ebb0) Reply frame received for 3\nI0826 16:43:01.254085    1687 log.go:172] (0xc00059ebb0) (0xc0009d63c0) Create stream\nI0826 16:43:01.254093    1687 log.go:172] (0xc00059ebb0) (0xc0009d63c0) Stream added, broadcasting: 5\nI0826 16:43:01.254987    1687 log.go:172] (0xc00059ebb0) Reply frame received for 5\nI0826 16:43:01.311093    1687 log.go:172] (0xc00059ebb0) Data frame received for 5\nI0826 16:43:01.311127    1687 log.go:172] (0xc0009d63c0) (5) Data frame handling\nI0826 16:43:01.311151    1687 log.go:172] (0xc0009d63c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 16:43:01.339714    1687 log.go:172] (0xc00059ebb0) Data frame received for 3\nI0826 16:43:01.339748    1687 log.go:172] (0xc0007bb4a0) (3) Data frame handling\nI0826 16:43:01.339762    1687 log.go:172] (0xc0007bb4a0) (3) Data frame sent\nI0826 16:43:01.339773    1687 log.go:172] (0xc00059ebb0) Data frame received for 3\nI0826 16:43:01.339782    1687 log.go:172] (0xc0007bb4a0) (3) Data frame handling\nI0826 16:43:01.340093    1687 log.go:172] (0xc00059ebb0) Data frame received for 5\nI0826 16:43:01.340172    1687 log.go:172] (0xc0009d63c0) (5) Data frame handling\nI0826 16:43:01.342992    1687 log.go:172] (0xc00059ebb0) Data frame received for 1\nI0826 16:43:01.343023    1687 log.go:172] (0xc0009d6320) (1) Data frame handling\nI0826 16:43:01.343036    1687 log.go:172] (0xc0009d6320) (1) Data frame sent\nI0826 16:43:01.343049    1687 log.go:172] (0xc00059ebb0) (0xc0009d6320) Stream removed, broadcasting: 1\nI0826 16:43:01.343101    1687 log.go:172] (0xc00059ebb0) Go away received\nI0826 16:43:01.343788    1687 log.go:172] (0xc00059ebb0) (0xc0009d6320) Stream removed, broadcasting: 1\nI0826 16:43:01.343806    1687 log.go:172] (0xc00059ebb0) (0xc0007bb4a0) Stream removed, broadcasting: 3\nI0826 16:43:01.343815    1687 log.go:172] (0xc00059ebb0) (0xc0009d63c0) Stream removed, broadcasting: 5\n"
Aug 26 16:43:01.352: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 16:43:01.352: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 16:43:01.353: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:43:01.357: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 26 16:43:11.415: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:43:11.415: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:43:11.415: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 26 16:43:11.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999582s
Aug 26 16:43:12.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.913657598s
Aug 26 16:43:13.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.90852286s
Aug 26 16:43:14.794: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.904815994s
Aug 26 16:43:16.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.625505654s
Aug 26 16:43:17.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.332300451s
Aug 26 16:43:18.095: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.328866969s
Aug 26 16:43:19.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.324723355s
Aug 26 16:43:20.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.320802399s
Aug 26 16:43:21.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 259.941238ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3153
Aug 26 16:43:22.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:43:23.951: INFO: stderr: "I0826 16:43:23.843535    1709 log.go:172] (0xc0007acdc0) (0xc0006d83c0) Create stream\nI0826 16:43:23.843597    1709 log.go:172] (0xc0007acdc0) (0xc0006d83c0) Stream added, broadcasting: 1\nI0826 16:43:23.847380    1709 log.go:172] (0xc0007acdc0) Reply frame received for 1\nI0826 16:43:23.847432    1709 log.go:172] (0xc0007acdc0) (0xc0006f5540) Create stream\nI0826 16:43:23.847449    1709 log.go:172] (0xc0007acdc0) (0xc0006f5540) Stream added, broadcasting: 3\nI0826 16:43:23.848231    1709 log.go:172] (0xc0007acdc0) Reply frame received for 3\nI0826 16:43:23.848268    1709 log.go:172] (0xc0007acdc0) (0xc000562960) Create stream\nI0826 16:43:23.848277    1709 log.go:172] (0xc0007acdc0) (0xc000562960) Stream added, broadcasting: 5\nI0826 16:43:23.849135    1709 log.go:172] (0xc0007acdc0) Reply frame received for 5\nI0826 16:43:23.942041    1709 log.go:172] (0xc0007acdc0) Data frame received for 5\nI0826 16:43:23.942065    1709 log.go:172] (0xc000562960) (5) Data frame handling\nI0826 16:43:23.942083    1709 log.go:172] (0xc000562960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 16:43:23.942111    1709 log.go:172] (0xc0007acdc0) Data frame received for 3\nI0826 16:43:23.942153    1709 log.go:172] (0xc0006f5540) (3) Data frame handling\nI0826 16:43:23.942178    1709 log.go:172] (0xc0006f5540) (3) Data frame sent\nI0826 16:43:23.942191    1709 log.go:172] (0xc0007acdc0) Data frame received for 3\nI0826 16:43:23.942200    1709 log.go:172] (0xc0006f5540) (3) Data frame handling\nI0826 16:43:23.942217    1709 log.go:172] (0xc0007acdc0) Data frame received for 5\nI0826 16:43:23.942230    1709 log.go:172] (0xc000562960) (5) Data frame handling\nI0826 16:43:23.943361    1709 log.go:172] (0xc0007acdc0) Data frame received for 1\nI0826 16:43:23.943383    1709 log.go:172] (0xc0006d83c0) (1) Data frame handling\nI0826 16:43:23.943392    1709 log.go:172] (0xc0006d83c0) (1) Data frame sent\nI0826 16:43:23.943404    1709 log.go:172] (0xc0007acdc0) (0xc0006d83c0) Stream removed, broadcasting: 1\nI0826 16:43:23.943422    1709 log.go:172] (0xc0007acdc0) Go away received\nI0826 16:43:23.943763    1709 log.go:172] (0xc0007acdc0) (0xc0006d83c0) Stream removed, broadcasting: 1\nI0826 16:43:23.943779    1709 log.go:172] (0xc0007acdc0) (0xc0006f5540) Stream removed, broadcasting: 3\nI0826 16:43:23.943801    1709 log.go:172] (0xc0007acdc0) (0xc000562960) Stream removed, broadcasting: 5\n"
Aug 26 16:43:23.951: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:43:23.951: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:43:23.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:43:24.468: INFO: stderr: "I0826 16:43:24.403912    1729 log.go:172] (0xc00003a4d0) (0xc0007d8320) Create stream\nI0826 16:43:24.403968    1729 log.go:172] (0xc00003a4d0) (0xc0007d8320) Stream added, broadcasting: 1\nI0826 16:43:24.405689    1729 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0826 16:43:24.405717    1729 log.go:172] (0xc00003a4d0) (0xc0007ec000) Create stream\nI0826 16:43:24.405724    1729 log.go:172] (0xc00003a4d0) (0xc0007ec000) Stream added, broadcasting: 3\nI0826 16:43:24.406344    1729 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0826 16:43:24.406365    1729 log.go:172] (0xc00003a4d0) (0xc0007d83c0) Create stream\nI0826 16:43:24.406372    1729 log.go:172] (0xc00003a4d0) (0xc0007d83c0) Stream added, broadcasting: 5\nI0826 16:43:24.406909    1729 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0826 16:43:24.463138    1729 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0826 16:43:24.463168    1729 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0826 16:43:24.463191    1729 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0826 16:43:24.463198    1729 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0826 16:43:24.463203    1729 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0826 16:43:24.463223    1729 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0826 16:43:24.463228    1729 log.go:172] (0xc0007d83c0) (5) Data frame handling\nI0826 16:43:24.463234    1729 log.go:172] (0xc0007d83c0) (5) Data frame sent\nI0826 16:43:24.463241    1729 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0826 16:43:24.463248    1729 log.go:172] (0xc0007d83c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 16:43:24.464251    1729 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0826 16:43:24.464266    1729 log.go:172] (0xc0007d8320) (1) Data frame handling\nI0826 16:43:24.464272    1729 log.go:172] (0xc0007d8320) (1) Data frame sent\nI0826 16:43:24.464280    1729 log.go:172] (0xc00003a4d0) (0xc0007d8320) Stream removed, broadcasting: 1\nI0826 16:43:24.464287    1729 log.go:172] (0xc00003a4d0) Go away received\nI0826 16:43:24.464525    1729 log.go:172] (0xc00003a4d0) (0xc0007d8320) Stream removed, broadcasting: 1\nI0826 16:43:24.464535    1729 log.go:172] (0xc00003a4d0) (0xc0007ec000) Stream removed, broadcasting: 3\nI0826 16:43:24.464540    1729 log.go:172] (0xc00003a4d0) (0xc0007d83c0) Stream removed, broadcasting: 5\n"
Aug 26 16:43:24.469: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 16:43:24.469: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 16:43:24.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:43:29.352: INFO: rc: 1
Aug 26 16:43:29.352: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
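The retries that follow keep failing because ss-2 is already being torn down by the scale-down to 0 (first "container not found", then "pods \"ss-2\" not found"); the e2e helper simply retries every 10s until its timeout. A hypothetical guard, not part of the test, that would skip pods that are already gone:

if kubectl get pod ss-2 -n statefulset-3153 >/dev/null 2>&1; then
  kubectl exec -n statefulset-3153 ss-2 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
else
  echo "ss-2 already deleted by the scale-down; nothing to restore"
fi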
Aug 26 16:43:39.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:43:42.606: INFO: rc: 1
Aug 26 16:43:42.606: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:43:52.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:43:53.329: INFO: rc: 1
Aug 26 16:43:53.329: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:03.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:03.420: INFO: rc: 1
Aug 26 16:44:03.420: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:13.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:13.663: INFO: rc: 1
Aug 26 16:44:13.663: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:23.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:23.759: INFO: rc: 1
Aug 26 16:44:23.759: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:33.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:33.860: INFO: rc: 1
Aug 26 16:44:33.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:43.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:43.953: INFO: rc: 1
Aug 26 16:44:43.953: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:44:53.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:44:55.248: INFO: rc: 1
Aug 26 16:44:55.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:05.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:05.347: INFO: rc: 1
Aug 26 16:45:05.347: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:15.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:15.447: INFO: rc: 1
Aug 26 16:45:15.447: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:25.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:25.542: INFO: rc: 1
Aug 26 16:45:25.542: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:35.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:35.765: INFO: rc: 1
Aug 26 16:45:35.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:45.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:45.858: INFO: rc: 1
Aug 26 16:45:45.858: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:45:55.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:45:55.962: INFO: rc: 1
Aug 26 16:45:55.962: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 26 16:46:05 - 16:48:17: INFO: (the same RunHostCmd was retried every 10s; every attempt returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-2" not found)
Aug 26 16:48:27.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 16:48:27.772: INFO: rc: 1
Aug 26 16:48:27.772: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Aug 26 16:48:27.772: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 26 16:48:27.781: INFO: Deleting all statefulset in ns statefulset-3153
Aug 26 16:48:27.782: INFO: Scaling statefulset ss to 0
Aug 26 16:48:27.789: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 16:48:27.791: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:48:27.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3153" for this suite.

• [SLOW TEST:379.779 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":67,"skipped":1088,"failed":0}
S
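The ordering asserted above comes from the StatefulSet's pod management policy: with the default OrderedReady policy, pods are created in ordinal order, scaled down in reverse, and scaling halts while any pod is unready. A minimal manifest of that shape is sketched below; the image, probe and service names are illustrative assumptions, not values taken from the suite.

apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: OrderedReady   # default: create 0..N-1 in order, delete N-1..0 in reverse
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine     # illustrative; the suite toggles readiness by moving index.html
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80

Scaling such a set to 0 removes ss-2 first, which is why the exec retries above keep hitting NotFound once that pod is gone.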
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:48:27.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 26 16:48:27.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config cluster-info'
Aug 26 16:48:27.973: INFO: stderr: ""
Aug 26 16:48:27.973: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:44383\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:44383/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:48:27.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3281" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":68,"skipped":1089,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:48:27.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:48:28.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 26 16:48:28.946: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:28Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:28Z]] name:name1 resourceVersion:1098532 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:700465cb-8e58-458b-9e41-bea90e00a6e6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 26 16:48:38.952: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:38Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:38Z]] name:name2 resourceVersion:1098595 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0e1e3203-afd5-4abd-a916-b8b300ec6c49] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 26 16:48:48.958: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:48Z]] name:name1 resourceVersion:1098623 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:700465cb-8e58-458b-9e41-bea90e00a6e6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 26 16:48:59.000: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:58Z]] name:name2 resourceVersion:1098653 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0e1e3203-afd5-4abd-a916-b8b300ec6c49] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 26 16:49:09.007: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:28Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:48Z]] name:name1 resourceVersion:1098681 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:700465cb-8e58-458b-9e41-bea90e00a6e6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 26 16:49:19.015: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-26T16:48:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-26T16:48:58Z]] name:name2 resourceVersion:1098711 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0e1e3203-afd5-4abd-a916-b8b300ec6c49] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:49:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2758" for this suite.

• [SLOW TEST:63.164 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":69,"skipped":1104,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
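The watch above runs against a cluster-scoped custom resource (the selfLink /apis/mygroup.example.com/v1beta1/noxus/name1 carries no namespace). A CRD of roughly that shape, sketched here as an assumption inferred from the log rather than taken from the test fixture, could be:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: WishIHadChosenNoxu
    listKind: WishIHadChosenNoxuList
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # the CRs in the log carry free-form content/num fields

With such a CRD established, a watch on the noxus resource (for example via kubectl get noxus --watch or a client-go watch) sees the same ADDED/MODIFIED/DELETED sequence the test asserts on.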
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:49:31.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-cb6cca6a-efe8-485a-b91b-7cbdca147f8a
STEP: Creating a pod to test consume configMaps
Aug 26 16:49:32.786: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764" in namespace "projected-4807" to be "Succeeded or Failed"
Aug 26 16:49:32.789: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98937ms
Aug 26 16:49:34.792: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006510727s
Aug 26 16:49:37.281: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495221366s
Aug 26 16:49:39.285: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Pending", Reason="", readiness=false. Elapsed: 6.499492444s
Aug 26 16:49:41.305: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519338764s
Aug 26 16:49:43.498: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.712682952s
STEP: Saw pod success
Aug 26 16:49:43.499: INFO: Pod "pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764" satisfied condition "Succeeded or Failed"
Aug 26 16:49:43.502: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 16:49:44.170: INFO: Waiting for pod pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764 to disappear
Aug 26 16:49:44.251: INFO: Pod pod-projected-configmaps-e5b15599-80f2-4f70-b404-0c66b02ca764 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:49:44.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4807" for this suite.

• [SLOW TEST:13.427 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1134,"failed":0}
SSSSSSSS
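The pod the test builds mounts a projected volume whose configMap source remaps a key to a new path with an explicit per-item file mode; the pod then just reads the file back. A sketch with illustrative names and payloads (not taken from the suite):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                       # illustrative; any image that can cat a file works
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400                   # per-item file mode; use decimal 256 when writing JSON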
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:49:44.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-cd2cd491-653d-43a3-a4d3-a13f6e22711c
STEP: Creating a pod to test consume secrets
Aug 26 16:49:45.360: INFO: Waiting up to 5m0s for pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b" in namespace "secrets-2898" to be "Succeeded or Failed"
Aug 26 16:49:45.516: INFO: Pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 156.576772ms
Aug 26 16:49:47.520: INFO: Pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160745573s
Aug 26 16:49:49.524: INFO: Pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164834597s
Aug 26 16:49:51.528: INFO: Pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167949514s
STEP: Saw pod success
Aug 26 16:49:51.528: INFO: Pod "pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b" satisfied condition "Succeeded or Failed"
Aug 26 16:49:51.530: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b container secret-volume-test: 
STEP: delete the pod
Aug 26 16:49:51.733: INFO: Waiting for pod pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b to disappear
Aug 26 16:49:51.784: INFO: Pod pod-secrets-5bb4db97-f50c-4a09-867e-551831bd6f3b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:49:51.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2898" for this suite.

• [SLOW TEST:7.221 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1142,"failed":0}
SSS
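The secret variant is structurally the same as the projected configmap case above, with the volume source swapped for a secret (and stringData available for plain-text input). A compact sketch, again with illustrative names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400                       # per-item file mode, as in the configmap example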
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:49:51.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:50:39.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3615" for this suite.

• [SLOW TEST:47.709 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1145,"failed":0}
SSSSSSSSSSSSSS
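The container names above encode the pod restartPolicy under test (rpa = Always, rpof = OnFailure, rpn = Never), and the suite checks the resulting RestartCount, Phase, Ready condition and State for each. A minimal pod exercising the OnFailure case could look like the following simplified sketch (image and command are assumptions, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-example
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd-rpof
    image: busybox
    command: ["sh", "-c", "exit 1"]      # exits non-zero, so the kubelet restarts it and RestartCount climbs

A conformance-style check would make the second run exit 0 so the pod can settle into phase Succeeded with RestartCount 1; with Never the container is left terminated after one failure, and with Always it is restarted regardless of exit code.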
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:50:39.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:50:43.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2377" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":73,"skipped":1159,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:50:43.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 26 16:50:44.365: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099075 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 16:50:44.365: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099077 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 16:50:44.365: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099079 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 26 16:50:54.438: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099136 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 16:50:54.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099137 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 16:50:54.438: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8824 /api/v1/namespaces/watch-8824/configmaps/e2e-watch-test-label-changed cd73a818-a761-407f-a7d3-06cd40b991de 1099138 0 2020-08-26 16:50:44 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-08-26 16:50:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:50:54.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8824" for this suite.

• [SLOW TEST:10.585 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":74,"skipped":1169,"failed":0}
SSSSSSSSS
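The event sequence above is driven entirely by the label selector on the watch: when the watch-this-configmap label is changed away from the watched value the watcher sees DELETED, and when the label is restored it sees ADDED carrying whatever mutations happened in between. The object itself is just a labelled configmap, along these lines (values taken from the log, the data key is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

A watch opened with a label selector such as watch-this-configmap=label-changed-and-restored only reports objects while they match, which is what turns a label edit into DELETED/ADDED events rather than MODIFIED ones.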
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:50:54.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-97
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 16:50:54.571: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 26 16:50:54.726: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:50:57.102: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:50:58.866: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 16:51:00.863: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 16:51:03.456: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 16:51:04.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 16:51:06.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 16:51:09.368: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 16:51:10.730: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 26 16:51:10.741: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 16:51:12.887: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 16:51:14.745: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 16:51:16.745: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 16:51:18.746: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 26 16:51:25.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.254:8080/dial?request=hostname&protocol=http&host=10.244.1.191&port=8080&tries=1'] Namespace:pod-network-test-97 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:51:25.079: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:51:25.116007       7 log.go:172] (0xc0028bbef0) (0xc001841220) Create stream
I0826 16:51:25.116040       7 log.go:172] (0xc0028bbef0) (0xc001841220) Stream added, broadcasting: 1
I0826 16:51:25.120117       7 log.go:172] (0xc0028bbef0) Reply frame received for 1
I0826 16:51:25.120166       7 log.go:172] (0xc0028bbef0) (0xc002bd1220) Create stream
I0826 16:51:25.120184       7 log.go:172] (0xc0028bbef0) (0xc002bd1220) Stream added, broadcasting: 3
I0826 16:51:25.121513       7 log.go:172] (0xc0028bbef0) Reply frame received for 3
I0826 16:51:25.121576       7 log.go:172] (0xc0028bbef0) (0xc0018f01e0) Create stream
I0826 16:51:25.121600       7 log.go:172] (0xc0028bbef0) (0xc0018f01e0) Stream added, broadcasting: 5
I0826 16:51:25.122693       7 log.go:172] (0xc0028bbef0) Reply frame received for 5
I0826 16:51:25.190514       7 log.go:172] (0xc0028bbef0) Data frame received for 3
I0826 16:51:25.190549       7 log.go:172] (0xc002bd1220) (3) Data frame handling
I0826 16:51:25.190581       7 log.go:172] (0xc002bd1220) (3) Data frame sent
I0826 16:51:25.190958       7 log.go:172] (0xc0028bbef0) Data frame received for 5
I0826 16:51:25.190979       7 log.go:172] (0xc0018f01e0) (5) Data frame handling
I0826 16:51:25.191196       7 log.go:172] (0xc0028bbef0) Data frame received for 3
I0826 16:51:25.191208       7 log.go:172] (0xc002bd1220) (3) Data frame handling
I0826 16:51:25.192872       7 log.go:172] (0xc0028bbef0) Data frame received for 1
I0826 16:51:25.192892       7 log.go:172] (0xc001841220) (1) Data frame handling
I0826 16:51:25.192902       7 log.go:172] (0xc001841220) (1) Data frame sent
I0826 16:51:25.192917       7 log.go:172] (0xc0028bbef0) (0xc001841220) Stream removed, broadcasting: 1
I0826 16:51:25.192941       7 log.go:172] (0xc0028bbef0) Go away received
I0826 16:51:25.193147       7 log.go:172] (0xc0028bbef0) (0xc001841220) Stream removed, broadcasting: 1
I0826 16:51:25.193167       7 log.go:172] (0xc0028bbef0) (0xc002bd1220) Stream removed, broadcasting: 3
I0826 16:51:25.193185       7 log.go:172] (0xc0028bbef0) (0xc0018f01e0) Stream removed, broadcasting: 5
Aug 26 16:51:25.193: INFO: Waiting for responses: map[]
Aug 26 16:51:25.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.254:8080/dial?request=hostname&protocol=http&host=10.244.2.253&port=8080&tries=1'] Namespace:pod-network-test-97 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 16:51:25.195: INFO: >>> kubeConfig: /root/.kube/config
I0826 16:51:25.219367       7 log.go:172] (0xc001ca2580) (0xc0018f0b40) Create stream
I0826 16:51:25.219407       7 log.go:172] (0xc001ca2580) (0xc0018f0b40) Stream added, broadcasting: 1
I0826 16:51:25.221300       7 log.go:172] (0xc001ca2580) Reply frame received for 1
I0826 16:51:25.221340       7 log.go:172] (0xc001ca2580) (0xc0018415e0) Create stream
I0826 16:51:25.221354       7 log.go:172] (0xc001ca2580) (0xc0018415e0) Stream added, broadcasting: 3
I0826 16:51:25.222277       7 log.go:172] (0xc001ca2580) Reply frame received for 3
I0826 16:51:25.222327       7 log.go:172] (0xc001ca2580) (0xc002bd1400) Create stream
I0826 16:51:25.222360       7 log.go:172] (0xc001ca2580) (0xc002bd1400) Stream added, broadcasting: 5
I0826 16:51:25.223221       7 log.go:172] (0xc001ca2580) Reply frame received for 5
I0826 16:51:25.287032       7 log.go:172] (0xc001ca2580) Data frame received for 3
I0826 16:51:25.287063       7 log.go:172] (0xc0018415e0) (3) Data frame handling
I0826 16:51:25.287081       7 log.go:172] (0xc0018415e0) (3) Data frame sent
I0826 16:51:25.287352       7 log.go:172] (0xc001ca2580) Data frame received for 5
I0826 16:51:25.287397       7 log.go:172] (0xc002bd1400) (5) Data frame handling
I0826 16:51:25.287447       7 log.go:172] (0xc001ca2580) Data frame received for 3
I0826 16:51:25.287470       7 log.go:172] (0xc0018415e0) (3) Data frame handling
I0826 16:51:25.289213       7 log.go:172] (0xc001ca2580) Data frame received for 1
I0826 16:51:25.289233       7 log.go:172] (0xc0018f0b40) (1) Data frame handling
I0826 16:51:25.289253       7 log.go:172] (0xc0018f0b40) (1) Data frame sent
I0826 16:51:25.289383       7 log.go:172] (0xc001ca2580) (0xc0018f0b40) Stream removed, broadcasting: 1
I0826 16:51:25.289418       7 log.go:172] (0xc001ca2580) Go away received
I0826 16:51:25.289600       7 log.go:172] (0xc001ca2580) (0xc0018f0b40) Stream removed, broadcasting: 1
I0826 16:51:25.289626       7 log.go:172] (0xc001ca2580) (0xc0018415e0) Stream removed, broadcasting: 3
I0826 16:51:25.289637       7 log.go:172] (0xc001ca2580) (0xc002bd1400) Stream removed, broadcasting: 5
Aug 26 16:51:25.289: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:51:25.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-97" for this suite.

• [SLOW TEST:30.837 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1178,"failed":0}
SSS
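The curl calls above use the test pod's /dial endpoint to make one pod fetch the hostname of another over HTTP (request=hostname, protocol=http, host=<target pod IP>, port=8080, tries=1). The netserver pods behind those IPs are plain pods running an HTTP echo server from the agnhost image; a rough sketch follows, where the netexec arguments are an assumption rather than something visible in this log:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-example
spec:
  containers:
  - name: webserver
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["netexec", "--http-port=8080"]   # assumption: serves /hostname and /dial on this port
    ports:
    - containerPort: 8080

The test then asserts that the JSON response from /dial contains the target pod's hostname, which proves pod-to-pod HTTP connectivity on the pod network.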
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:51:25.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-5609f9b4-0b6e-4cf0-8f28-748ca02757b1
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:51:35.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6963" for this suite.

• [SLOW TEST:10.287 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1181,"failed":0}
SSSSSS
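binaryData holds base64-encoded bytes alongside the usual string data, and both are materialised as files when the configmap is mounted; the test waits until the text key and the binary key both show up inside the pod. A sketch with made-up names and payloads:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example
data:
  data-1: value-1
binaryData:
  dump.bin: CAkKCw==                   # base64 for the raw bytes 0x08 0x09 0x0a 0x0b
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  containers:
  - name: configmap-volume-binary-test
    image: busybox
    command: ["sh", "-c", "od -c /etc/configmap-volume/dump.bin && sleep 3600"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-example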
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:51:35.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Aug 26 16:51:35.710: INFO: Waiting up to 5m0s for pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e" in namespace "var-expansion-2134" to be "Succeeded or Failed"
Aug 26 16:51:35.752: INFO: Pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e": Phase="Pending", Reason="", readiness=false. Elapsed: 41.277849ms
Aug 26 16:51:37.755: INFO: Pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044911155s
Aug 26 16:51:39.791: INFO: Pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080554502s
Aug 26 16:51:42.139: INFO: Pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.428797356s
STEP: Saw pod success
Aug 26 16:51:42.139: INFO: Pod "var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e" satisfied condition "Succeeded or Failed"
Aug 26 16:51:42.361: INFO: Trying to get logs from node kali-worker2 pod var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e container dapi-container: 
STEP: delete the pod
Aug 26 16:51:42.960: INFO: Waiting for pod var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e to disappear
Aug 26 16:51:43.510: INFO: Pod var-expansion-a77bb2e8-1c3f-45fb-9574-00b53f73972e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:51:43.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2134" for this suite.

• [SLOW TEST:7.931 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1187,"failed":0}
SSSSSSSSSSSSSSS
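Dependent environment variables are composed with the $(VAR) syntax, which the kubelet expands using variables defined earlier in the same env list. The pod the test builds is essentially the following (names and values here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"   # expands to foo-value;;bar-value because FOO and BAR are defined above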
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:51:43.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:51:44.850: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a" in namespace "downward-api-2148" to be "Succeeded or Failed"
Aug 26 16:51:44.943: INFO: Pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a": Phase="Pending", Reason="", readiness=false. Elapsed: 93.207319ms
Aug 26 16:51:46.948: INFO: Pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097945666s
Aug 26 16:51:49.229: INFO: Pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379645885s
Aug 26 16:51:51.445: INFO: Pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.595583791s
STEP: Saw pod success
Aug 26 16:51:51.445: INFO: Pod "downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a" satisfied condition "Succeeded or Failed"
Aug 26 16:51:51.464: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a container client-container: 
STEP: delete the pod
Aug 26 16:51:51.744: INFO: Waiting for pod downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a to disappear
Aug 26 16:51:52.306: INFO: Pod downwardapi-volume-0644d82f-acbb-4e93-a32f-4fbf5770561a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:51:52.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2148" for this suite.

• [SLOW TEST:8.833 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1202,"failed":0}
SSSSSSSSS
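The downward API volume exposes a container's own resource requests as files; for requests.memory the value is written divided by the divisor, which is what the client-container reads back in the test. A minimal sketch, with the request size and paths chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi               # the file then contains "32" for a 32Mi request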
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:51:52.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug 26 16:51:52.755: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 26 16:51:52.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:53.963: INFO: stderr: ""
Aug 26 16:51:53.963: INFO: stdout: "service/agnhost-slave created\n"
Aug 26 16:51:53.963: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 26 16:51:53.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:55.319: INFO: stderr: ""
Aug 26 16:51:55.319: INFO: stdout: "service/agnhost-master created\n"
Aug 26 16:51:55.319: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 26 16:51:55.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:56.784: INFO: stderr: ""
Aug 26 16:51:56.784: INFO: stdout: "service/frontend created\n"
Aug 26 16:51:56.784: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 26 16:51:56.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:57.647: INFO: stderr: ""
Aug 26 16:51:57.647: INFO: stdout: "deployment.apps/frontend created\n"
Aug 26 16:51:57.647: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 16:51:57.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:58.296: INFO: stderr: ""
Aug 26 16:51:58.296: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 26 16:51:58.296: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 26 16:51:58.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612'
Aug 26 16:51:59.961: INFO: stderr: ""
Aug 26 16:51:59.961: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 26 16:51:59.961: INFO: Waiting for all frontend pods to be Running.
Aug 26 16:52:15.012: INFO: Waiting for frontend to serve content.
Aug 26 16:52:15.022: INFO: Trying to add a new entry to the guestbook.
Aug 26 16:52:15.032: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 26 16:52:15.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:15.190: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:15.190: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 16:52:15.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:15.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:15.470: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 16:52:15.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:15.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:15.581: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 16:52:15.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:15.682: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:15.682: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 16:52:15.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:15.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:15.820: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 16:52:15.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612'
Aug 26 16:52:16.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 16:52:16.388: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:52:16.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9612" for this suite.

• [SLOW TEST:24.509 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":79,"skipped":1211,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:52:16.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7416.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7416.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7416.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7416.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7416.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7416.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 16:52:25.762: INFO: DNS probes using dns-7416/dns-test-84b5a5a1-6ac5-414d-960a-22b9fee66e44 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:52:25.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7416" for this suite.

• [SLOW TEST:9.025 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":80,"skipped":1216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:52:25.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 26 16:52:25.996: INFO: Waiting up to 5m0s for pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8" in namespace "var-expansion-4673" to be "Succeeded or Failed"
Aug 26 16:52:26.354: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 358.193357ms
Aug 26 16:52:28.564: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568116461s
Aug 26 16:52:30.719: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.723372192s
Aug 26 16:52:33.065: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069558898s
Aug 26 16:52:35.122: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.125957632s
STEP: Saw pod success
Aug 26 16:52:35.122: INFO: Pod "var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8" satisfied condition "Succeeded or Failed"
Aug 26 16:52:35.125: INFO: Trying to get logs from node kali-worker pod var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8 container dapi-container: 
STEP: delete the pod
Aug 26 16:52:35.218: INFO: Waiting for pod var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8 to disappear
Aug 26 16:52:35.390: INFO: Pod var-expansion-09e24ed7-c0a2-46d4-b032-2fa663d9ebc8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:52:35.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4673" for this suite.

• [SLOW TEST:9.512 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1254,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:52:35.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 16:52:35.618: INFO: Waiting up to 5m0s for pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3" in namespace "emptydir-2079" to be "Succeeded or Failed"
Aug 26 16:52:36.049: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 430.659434ms
Aug 26 16:52:38.594: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.975651271s
Aug 26 16:52:40.846: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.227358451s
Aug 26 16:52:43.397: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.778222031s
Aug 26 16:52:45.399: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.780907114s
STEP: Saw pod success
Aug 26 16:52:45.399: INFO: Pod "pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3" satisfied condition "Succeeded or Failed"
Aug 26 16:52:45.401: INFO: Trying to get logs from node kali-worker2 pod pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3 container test-container: 
STEP: delete the pod
Aug 26 16:52:46.055: INFO: Waiting for pod pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3 to disappear
Aug 26 16:52:46.083: INFO: Pod pod-793ec786-a38c-4c09-a856-7f7ab33a5bb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:52:46.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2079" for this suite.

• [SLOW TEST:10.692 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1259,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:52:46.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 16:52:47.573: INFO: Waiting up to 5m0s for pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3" in namespace "emptydir-1919" to be "Succeeded or Failed"
Aug 26 16:52:47.641: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Pending", Reason="", readiness=false. Elapsed: 67.379989ms
Aug 26 16:52:49.743: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169374964s
Aug 26 16:52:52.966: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.392674985s
Aug 26 16:52:55.003: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.42993737s
Aug 26 16:52:57.072: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.498980017s
Aug 26 16:52:59.076: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.502972329s
STEP: Saw pod success
Aug 26 16:52:59.076: INFO: Pod "pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3" satisfied condition "Succeeded or Failed"
Aug 26 16:52:59.080: INFO: Trying to get logs from node kali-worker pod pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3 container test-container: 
STEP: delete the pod
Aug 26 16:52:59.943: INFO: Waiting for pod pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3 to disappear
Aug 26 16:53:00.006: INFO: Pod pod-5dd9f6e4-c0dd-4624-9dbc-f12ba70662c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:53:00.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1919" for this suite.

• [SLOW TEST:14.234 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1265,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:53:00.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-9ed0b452-3a68-4c8d-a110-a0435e1f25ec
STEP: Creating a pod to test consume configMaps
Aug 26 16:53:01.741: INFO: Waiting up to 5m0s for pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f" in namespace "configmap-6122" to be "Succeeded or Failed"
Aug 26 16:53:02.079: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Pending", Reason="", readiness=false. Elapsed: 337.327141ms
Aug 26 16:53:04.259: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518287356s
Aug 26 16:53:06.571: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.829705151s
Aug 26 16:53:08.923: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.181867732s
Aug 26 16:53:10.927: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.185711519s
Aug 26 16:53:13.151: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Running", Reason="", readiness=true. Elapsed: 11.409615829s
Aug 26 16:53:15.678: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.937290428s
STEP: Saw pod success
Aug 26 16:53:15.679: INFO: Pod "pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f" satisfied condition "Succeeded or Failed"
Aug 26 16:53:15.681: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f container configmap-volume-test: 
STEP: delete the pod
Aug 26 16:53:17.371: INFO: Waiting for pod pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f to disappear
Aug 26 16:53:17.533: INFO: Pod pod-configmaps-abba9b2e-b642-442f-b0c7-6b352fe5801f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:53:17.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6122" for this suite.

• [SLOW TEST:17.213 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:53:17.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 26 16:53:19.098: INFO: Waiting up to 5m0s for pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb" in namespace "downward-api-1523" to be "Succeeded or Failed"
Aug 26 16:53:19.277: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Pending", Reason="", readiness=false. Elapsed: 179.382254ms
Aug 26 16:53:21.439: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340975473s
Aug 26 16:53:23.802: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.703897693s
Aug 26 16:53:26.032: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.933700404s
Aug 26 16:53:28.229: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.130858863s
Aug 26 16:53:30.362: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Running", Reason="", readiness=true. Elapsed: 11.264197986s
Aug 26 16:53:32.823: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.725103423s
STEP: Saw pod success
Aug 26 16:53:32.823: INFO: Pod "downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb" satisfied condition "Succeeded or Failed"
Aug 26 16:53:32.825: INFO: Trying to get logs from node kali-worker2 pod downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb container dapi-container: 
STEP: delete the pod
Aug 26 16:53:34.154: INFO: Waiting for pod downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb to disappear
Aug 26 16:53:34.211: INFO: Pod downward-api-f9274256-ddc5-4c2b-a905-f1d9315b27bb no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:53:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1523" for this suite.

• [SLOW TEST:16.868 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:53:34.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-bcfa6c64-5634-4293-b9f6-476595b7136b
STEP: Creating secret with name s-test-opt-upd-0c72d697-fda3-4b61-bd87-9863148df152
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bcfa6c64-5634-4293-b9f6-476595b7136b
STEP: Updating secret s-test-opt-upd-0c72d697-fda3-4b61-bd87-9863148df152
STEP: Creating secret with name s-test-opt-create-4c4ad168-2fc5-4d55-8c6e-13cdafc70a51
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:55:17.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3894" for this suite.

• [SLOW TEST:102.944 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1348,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:55:17.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Aug 26 16:55:18.051: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Aug 26 16:55:18.427: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 26 16:55:18.427: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Aug 26 16:55:18.643: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Aug 26 16:55:18.643: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Aug 26 16:55:19.205: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Aug 26 16:55:19.205: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Aug 26 16:55:28.051: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:55:29.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4597" for this suite.

• [SLOW TEST:12.528 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":87,"skipped":1377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:55:29.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:55:31.426: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 26 16:55:36.658: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 16:55:49.362: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 26 16:55:51.509: INFO: Creating deployment "test-rollover-deployment"
Aug 26 16:55:52.206: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 26 16:55:56.782: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 26 16:55:57.203: INFO: Ensure that both replica sets have 1 created replica
Aug 26 16:55:58.378: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 26 16:55:58.596: INFO: Updating deployment test-rollover-deployment
Aug 26 16:55:58.596: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 26 16:56:01.457: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 26 16:56:01.661: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 26 16:56:02.435: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:02.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:3, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057760, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:04.556: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:04.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057761, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:06.598: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:06.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057761, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:08.496: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:08.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057761, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:10.680: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:10.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:12.441: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:12.441: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:14.672: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:14.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:16.443: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:16.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:18.491: INFO: all replica sets need to contain the pod-template-hash label
Aug 26 16:56:18.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057753, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057752, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:20.440: INFO: 
Aug 26 16:56:20.440: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 26 16:56:20.447: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-7557 /apis/apps/v1/namespaces/deployment-7557/deployments/test-rollover-deployment e12da1ed-b5ec-4598-b52a-d7fe1f72223b 1100616 2 2020-08-26 16:55:51 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-26 16:55:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 16:56:20 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 
105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008b0cd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 16:55:53 +0000 UTC,LastTransitionTime:2020-08-26 16:55:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-26 16:56:20 +0000 UTC,LastTransitionTime:2020-08-26 16:55:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 16:56:20.449: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-7557 /apis/apps/v1/namespaces/deployment-7557/replicasets/test-rollover-deployment-84f7f6f64b 18a2eaf5-7830-4ecc-a2e6-10441390a53a 1100602 2 2020-08-26 16:55:58 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e12da1ed-b5ec-4598-b52a-d7fe1f72223b 0xc0008b16e7 0xc0008b16e8}] []  [{kube-controller-manager Update apps/v1 2020-08-26 16:56:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 49 50 100 97 49 101 100 45 98 53 101 99 45 52 53 57 56 45 98 53 50 97 45 100 55 102 101 49 102 55 50 50 50 51 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 
58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008b1798  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 16:56:20.449: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 26 16:56:20.449: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-7557 /apis/apps/v1/namespaces/deployment-7557/replicasets/test-rollover-controller befe843e-a0c6-48ca-9fcc-d5f8d3e12e00 1100615 2 2020-08-26 16:55:30 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e12da1ed-b5ec-4598-b52a-d7fe1f72223b 0xc0008b12ff 0xc0008b1330}] []  [{e2e.test Update apps/v1 2020-08-26 16:55:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 16:56:20 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 49 50 100 97 49 101 100 45 98 53 101 99 45 52 53 57 56 45 98 53 50 97 45 100 55 102 101 49 102 55 50 50 50 51 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 
100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0008b14a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 16:56:20.450: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-7557 /apis/apps/v1/namespaces/deployment-7557/replicasets/test-rollover-deployment-5686c4cfd5 a2289a63-0ce6-4ddc-9d96-3e8ccdb940e5 1100543 2 2020-08-26 16:55:52 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e12da1ed-b5ec-4598-b52a-d7fe1f72223b 0xc0008b15b7 0xc0008b15b8}] []  [{kube-controller-manager Update apps/v1 2020-08-26 16:56:01 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 49 50 100 97 49 101 100 45 98 53 101 99 45 52 53 57 56 45 98 53 50 97 45 100 55 102 101 49 102 55 50 50 50 51 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 
101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008b1678  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 16:56:20.452: INFO: Pod "test-rollover-deployment-84f7f6f64b-gr9xt" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-gr9xt test-rollover-deployment-84f7f6f64b- deployment-7557 /api/v1/namespaces/deployment-7557/pods/test-rollover-deployment-84f7f6f64b-gr9xt 76d3974f-ab33-449d-9cad-550999e4b0b8 1100573 0 2020-08-26 16:55:59 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 18a2eaf5-7830-4ecc-a2e6-10441390a53a 0xc002c3bcf7 0xc002c3bcf8}] []  [{kube-controller-manager Update v1 2020-08-26 16:55:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 56 97 50 101 97 102 53 45 55 56 51 48 45 52 101 99 99 45 97 50 101 54 45 49 48 52 52 49 51 57 48 97 53 51 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 16:56:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 
34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r9rnt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r9rnt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r9rnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,V
alue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:56:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:56:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:56:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 16:55:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.14,StartTime:2020-08-26 16:56:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 16:56:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c88fb99adb2f9cddf05b577337e81986f892e09e1e1fd7078d955d769cc3d281,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:56:20.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7557" for this suite.

• [SLOW TEST:50.577 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":88,"skipped":1428,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:56:20.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 16:56:20.773: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26" in namespace "projected-7932" to be "Succeeded or Failed"
Aug 26 16:56:20.871: INFO: Pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26": Phase="Pending", Reason="", readiness=false. Elapsed: 97.897691ms
Aug 26 16:56:22.876: INFO: Pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102719247s
Aug 26 16:56:24.879: INFO: Pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106103017s
Aug 26 16:56:27.070: INFO: Pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.297202669s
STEP: Saw pod success
Aug 26 16:56:27.070: INFO: Pod "downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26" satisfied condition "Succeeded or Failed"
Aug 26 16:56:27.073: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26 container client-container: 
STEP: delete the pod
Aug 26 16:56:27.435: INFO: Waiting for pod downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26 to disappear
Aug 26 16:56:27.566: INFO: Pod downwardapi-volume-f5ed0bb2-12da-40ff-a592-7121b8e22a26 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:56:27.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7932" for this suite.

• [SLOW TEST:7.132 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:56:27.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 16:56:30.345: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 16:56:32.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:34.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 16:56:36.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734057790, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 16:56:39.411: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 26 16:56:45.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config attach --namespace=webhook-7745 to-be-attached-pod -i -c=container1'
Aug 26 16:56:49.463: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:56:49.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7745" for this suite.
STEP: Destroying namespace "webhook-7745-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.029 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":90,"skipped":1510,"failed":0}
SSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:56:49.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:56:50.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1835
I0826 16:56:50.341805       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1835, replica count: 1
I0826 16:56:51.392211       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:52.392418       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:53.392621       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:54.392916       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:55.393072       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:56.393208       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:56:57.393375       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
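The runners.go lines above poll the svc-latency-rc replication controller until its single pod leaves Pending. A comparable wait loop with client-go might look like the sketch below; the namespace and controller name follow the log, while the label selector, poll interval, and timeout are assumptions of this sketch.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		// Assumes the RC labels its pods with name=<rc-name>.
		pods, err := cs.CoreV1().Pods("svc-latency-1835").List(ctx, metav1.ListOptions{
			LabelSelector: "name=svc-latency-rc",
		})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d out of %d pods running\n", running, len(pods.Items))
		return len(pods.Items) > 0 && running == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```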
Aug 26 16:56:57.888: INFO: Created: latency-svc-7rtn2
Aug 26 16:56:57.895: INFO: Got endpoints: latency-svc-7rtn2 [402.28431ms]
Aug 26 16:56:58.871: INFO: Created: latency-svc-8jvrg
Aug 26 16:56:59.730: INFO: Got endpoints: latency-svc-8jvrg [1.834706351s]
Aug 26 16:57:00.667: INFO: Created: latency-svc-ltscw
Aug 26 16:57:00.702: INFO: Got endpoints: latency-svc-ltscw [2.806068757s]
Aug 26 16:57:00.877: INFO: Created: latency-svc-nfp5c
Aug 26 16:57:00.928: INFO: Got endpoints: latency-svc-nfp5c [3.032765505s]
Aug 26 16:57:01.139: INFO: Created: latency-svc-q67sj
Aug 26 16:57:01.181: INFO: Got endpoints: latency-svc-q67sj [3.285632036s]
Aug 26 16:57:01.319: INFO: Created: latency-svc-hqtf5
Aug 26 16:57:01.575: INFO: Got endpoints: latency-svc-hqtf5 [3.679227014s]
Aug 26 16:57:01.920: INFO: Created: latency-svc-vxvds
Aug 26 16:57:01.959: INFO: Got endpoints: latency-svc-vxvds [4.063673606s]
Aug 26 16:57:02.357: INFO: Created: latency-svc-mmdnv
Aug 26 16:57:02.549: INFO: Got endpoints: latency-svc-mmdnv [4.653723404s]
Aug 26 16:57:02.855: INFO: Created: latency-svc-nffmf
Aug 26 16:57:02.928: INFO: Got endpoints: latency-svc-nffmf [5.032207357s]
Aug 26 16:57:03.088: INFO: Created: latency-svc-dmgvv
Aug 26 16:57:03.143: INFO: Got endpoints: latency-svc-dmgvv [5.247695282s]
Aug 26 16:57:03.282: INFO: Created: latency-svc-4btkc
Aug 26 16:57:03.288: INFO: Got endpoints: latency-svc-4btkc [5.392173966s]
Aug 26 16:57:03.667: INFO: Created: latency-svc-n454v
Aug 26 16:57:03.720: INFO: Got endpoints: latency-svc-n454v [5.82396229s]
Aug 26 16:57:04.183: INFO: Created: latency-svc-72z2j
Aug 26 16:57:04.199: INFO: Got endpoints: latency-svc-72z2j [6.303198746s]
Aug 26 16:57:04.435: INFO: Created: latency-svc-c2rrm
Aug 26 16:57:04.469: INFO: Got endpoints: latency-svc-c2rrm [6.572746859s]
Aug 26 16:57:04.591: INFO: Created: latency-svc-z6dgq
Aug 26 16:57:04.615: INFO: Got endpoints: latency-svc-z6dgq [6.718923502s]
Aug 26 16:57:04.805: INFO: Created: latency-svc-md9wr
Aug 26 16:57:04.810: INFO: Got endpoints: latency-svc-md9wr [341.158436ms]
Aug 26 16:57:04.850: INFO: Created: latency-svc-tkn2r
Aug 26 16:57:04.974: INFO: Got endpoints: latency-svc-tkn2r [7.078127356s]
Aug 26 16:57:04.982: INFO: Created: latency-svc-8kkv2
Aug 26 16:57:05.037: INFO: Got endpoints: latency-svc-8kkv2 [5.306593554s]
Aug 26 16:57:05.233: INFO: Created: latency-svc-lrhwz
Aug 26 16:57:05.279: INFO: Got endpoints: latency-svc-lrhwz [4.577112518s]
Aug 26 16:57:05.752: INFO: Created: latency-svc-dc49d
Aug 26 16:57:05.997: INFO: Got endpoints: latency-svc-dc49d [5.068425717s]
Aug 26 16:57:06.306: INFO: Created: latency-svc-2tljl
Aug 26 16:57:06.345: INFO: Got endpoints: latency-svc-2tljl [5.163891444s]
Aug 26 16:57:06.658: INFO: Created: latency-svc-4877d
Aug 26 16:57:07.059: INFO: Got endpoints: latency-svc-4877d [5.483536423s]
Aug 26 16:57:07.061: INFO: Created: latency-svc-q7qpw
Aug 26 16:57:07.098: INFO: Got endpoints: latency-svc-q7qpw [5.138327482s]
Aug 26 16:57:07.266: INFO: Created: latency-svc-4fkml
Aug 26 16:57:07.685: INFO: Got endpoints: latency-svc-4fkml [5.135376727s]
Aug 26 16:57:08.081: INFO: Created: latency-svc-l2vfr
Aug 26 16:57:08.089: INFO: Got endpoints: latency-svc-l2vfr [5.161340772s]
Aug 26 16:57:08.292: INFO: Created: latency-svc-tcz59
Aug 26 16:57:08.304: INFO: Got endpoints: latency-svc-tcz59 [5.160706831s]
Aug 26 16:57:08.391: INFO: Created: latency-svc-xx5nf
Aug 26 16:57:08.435: INFO: Got endpoints: latency-svc-xx5nf [5.146907256s]
Aug 26 16:57:08.545: INFO: Created: latency-svc-m6hxf
Aug 26 16:57:08.583: INFO: Got endpoints: latency-svc-m6hxf [4.863543001s]
Aug 26 16:57:08.722: INFO: Created: latency-svc-bqshp
Aug 26 16:57:08.758: INFO: Got endpoints: latency-svc-bqshp [4.559304157s]
Aug 26 16:57:08.975: INFO: Created: latency-svc-5kmkl
Aug 26 16:57:09.000: INFO: Got endpoints: latency-svc-5kmkl [4.384869859s]
Aug 26 16:57:09.931: INFO: Created: latency-svc-lqrl2
Aug 26 16:57:09.974: INFO: Got endpoints: latency-svc-lqrl2 [5.164547748s]
Aug 26 16:57:10.467: INFO: Created: latency-svc-zltm6
Aug 26 16:57:10.674: INFO: Got endpoints: latency-svc-zltm6 [5.700783572s]
Aug 26 16:57:10.690: INFO: Created: latency-svc-2kwpx
Aug 26 16:57:10.721: INFO: Got endpoints: latency-svc-2kwpx [5.684285609s]
Aug 26 16:57:11.164: INFO: Created: latency-svc-tb72b
Aug 26 16:57:11.344: INFO: Got endpoints: latency-svc-tb72b [6.064895969s]
Aug 26 16:57:11.533: INFO: Created: latency-svc-qrcnw
Aug 26 16:57:11.557: INFO: Got endpoints: latency-svc-qrcnw [5.55986123s]
Aug 26 16:57:11.728: INFO: Created: latency-svc-4v9cj
Aug 26 16:57:11.778: INFO: Got endpoints: latency-svc-4v9cj [5.432392732s]
Aug 26 16:57:11.944: INFO: Created: latency-svc-n66q6
Aug 26 16:57:11.987: INFO: Got endpoints: latency-svc-n66q6 [4.928817637s]
Aug 26 16:57:12.135: INFO: Created: latency-svc-x7kcp
Aug 26 16:57:12.139: INFO: Got endpoints: latency-svc-x7kcp [5.041196889s]
Aug 26 16:57:12.560: INFO: Created: latency-svc-bslf5
Aug 26 16:57:12.624: INFO: Got endpoints: latency-svc-bslf5 [4.938817989s]
Aug 26 16:57:12.985: INFO: Created: latency-svc-tgq7w
Aug 26 16:57:13.410: INFO: Got endpoints: latency-svc-tgq7w [5.320910452s]
Aug 26 16:57:13.932: INFO: Created: latency-svc-648rp
Aug 26 16:57:14.146: INFO: Got endpoints: latency-svc-648rp [5.841130463s]
Aug 26 16:57:14.881: INFO: Created: latency-svc-f7tf8
Aug 26 16:57:15.026: INFO: Got endpoints: latency-svc-f7tf8 [6.591375655s]
Aug 26 16:57:15.308: INFO: Created: latency-svc-v4w6x
Aug 26 16:57:15.453: INFO: Got endpoints: latency-svc-v4w6x [6.869628498s]
Aug 26 16:57:15.501: INFO: Created: latency-svc-l2kkm
Aug 26 16:57:15.756: INFO: Got endpoints: latency-svc-l2kkm [6.998277205s]
Aug 26 16:57:15.789: INFO: Created: latency-svc-4tvwv
Aug 26 16:57:15.979: INFO: Got endpoints: latency-svc-4tvwv [6.979385531s]
Aug 26 16:57:16.078: INFO: Created: latency-svc-lm7gx
Aug 26 16:57:16.260: INFO: Got endpoints: latency-svc-lm7gx [6.285880583s]
Aug 26 16:57:16.355: INFO: Created: latency-svc-fn2qd
Aug 26 16:57:16.495: INFO: Got endpoints: latency-svc-fn2qd [5.820274296s]
Aug 26 16:57:16.498: INFO: Created: latency-svc-8br88
Aug 26 16:57:16.543: INFO: Got endpoints: latency-svc-8br88 [5.821425797s]
Aug 26 16:57:16.757: INFO: Created: latency-svc-x2l9k
Aug 26 16:57:16.774: INFO: Got endpoints: latency-svc-x2l9k [5.429934695s]
Aug 26 16:57:17.328: INFO: Created: latency-svc-pwccf
Aug 26 16:57:17.731: INFO: Got endpoints: latency-svc-pwccf [6.173862339s]
Aug 26 16:57:17.962: INFO: Created: latency-svc-82zgs
Aug 26 16:57:18.360: INFO: Got endpoints: latency-svc-82zgs [6.582137389s]
Aug 26 16:57:18.931: INFO: Created: latency-svc-qvxtp
Aug 26 16:57:19.353: INFO: Got endpoints: latency-svc-qvxtp [7.365190884s]
Aug 26 16:57:21.534: INFO: Created: latency-svc-dwlhw
Aug 26 16:57:22.017: INFO: Got endpoints: latency-svc-dwlhw [9.877745961s]
Aug 26 16:57:22.273: INFO: Created: latency-svc-nl9k2
Aug 26 16:57:22.739: INFO: Got endpoints: latency-svc-nl9k2 [10.11513758s]
Aug 26 16:57:23.167: INFO: Created: latency-svc-zh7h6
Aug 26 16:57:23.207: INFO: Got endpoints: latency-svc-zh7h6 [9.796860935s]
Aug 26 16:57:24.511: INFO: Created: latency-svc-ngj8v
Aug 26 16:57:25.159: INFO: Got endpoints: latency-svc-ngj8v [11.013657452s]
Aug 26 16:57:25.733: INFO: Created: latency-svc-kjk72
Aug 26 16:57:25.797: INFO: Got endpoints: latency-svc-kjk72 [10.770724477s]
Aug 26 16:57:26.369: INFO: Created: latency-svc-hkvgg
Aug 26 16:57:26.411: INFO: Got endpoints: latency-svc-hkvgg [10.95826101s]
Aug 26 16:57:26.719: INFO: Created: latency-svc-whrfz
Aug 26 16:57:27.231: INFO: Got endpoints: latency-svc-whrfz [11.474202241s]
Aug 26 16:57:27.560: INFO: Created: latency-svc-w9948
Aug 26 16:57:27.769: INFO: Got endpoints: latency-svc-w9948 [11.789658142s]
Aug 26 16:57:27.943: INFO: Created: latency-svc-7sm6s
Aug 26 16:57:28.326: INFO: Got endpoints: latency-svc-7sm6s [12.065056293s]
Aug 26 16:57:28.772: INFO: Created: latency-svc-n2mfb
Aug 26 16:57:28.863: INFO: Got endpoints: latency-svc-n2mfb [12.36837177s]
Aug 26 16:57:29.386: INFO: Created: latency-svc-8nwmh
Aug 26 16:57:29.716: INFO: Got endpoints: latency-svc-8nwmh [13.173742666s]
Aug 26 16:57:29.961: INFO: Created: latency-svc-ktjv9
Aug 26 16:57:29.965: INFO: Got endpoints: latency-svc-ktjv9 [13.191670495s]
Aug 26 16:57:30.223: INFO: Created: latency-svc-pjbd5
Aug 26 16:57:30.240: INFO: Got endpoints: latency-svc-pjbd5 [12.509406689s]
Aug 26 16:57:30.385: INFO: Created: latency-svc-5tqfr
Aug 26 16:57:30.691: INFO: Created: latency-svc-s4bvf
Aug 26 16:57:30.691: INFO: Got endpoints: latency-svc-5tqfr [12.331146794s]
Aug 26 16:57:30.937: INFO: Got endpoints: latency-svc-s4bvf [11.584171089s]
Aug 26 16:57:31.243: INFO: Created: latency-svc-pbkpk
Aug 26 16:57:31.299: INFO: Got endpoints: latency-svc-pbkpk [9.281723311s]
Aug 26 16:57:31.662: INFO: Created: latency-svc-f8r9n
Aug 26 16:57:31.719: INFO: Got endpoints: latency-svc-f8r9n [8.980190389s]
Aug 26 16:57:31.880: INFO: Created: latency-svc-gd8th
Aug 26 16:57:32.063: INFO: Got endpoints: latency-svc-gd8th [8.855732991s]
Aug 26 16:57:32.065: INFO: Created: latency-svc-tvp7t
Aug 26 16:57:32.083: INFO: Got endpoints: latency-svc-tvp7t [6.92399129s]
Aug 26 16:57:32.427: INFO: Created: latency-svc-bw25p
Aug 26 16:57:32.883: INFO: Got endpoints: latency-svc-bw25p [7.086087034s]
Aug 26 16:57:32.893: INFO: Created: latency-svc-spll4
Aug 26 16:57:32.946: INFO: Got endpoints: latency-svc-spll4 [6.535073217s]
Aug 26 16:57:33.723: INFO: Created: latency-svc-fhx7q
Aug 26 16:57:33.860: INFO: Got endpoints: latency-svc-fhx7q [6.628884536s]
Aug 26 16:57:33.933: INFO: Created: latency-svc-zshnv
Aug 26 16:57:34.231: INFO: Got endpoints: latency-svc-zshnv [6.461805994s]
Aug 26 16:57:34.416: INFO: Created: latency-svc-cgl7l
Aug 26 16:57:34.590: INFO: Got endpoints: latency-svc-cgl7l [6.264183668s]
Aug 26 16:57:34.666: INFO: Created: latency-svc-w8vn4
Aug 26 16:57:34.769: INFO: Got endpoints: latency-svc-w8vn4 [5.906127055s]
Aug 26 16:57:34.811: INFO: Created: latency-svc-4nns7
Aug 26 16:57:34.864: INFO: Got endpoints: latency-svc-4nns7 [5.147259232s]
Aug 26 16:57:34.998: INFO: Created: latency-svc-g74k8
Aug 26 16:57:35.195: INFO: Got endpoints: latency-svc-g74k8 [5.229932034s]
Aug 26 16:57:35.453: INFO: Created: latency-svc-fs526
Aug 26 16:57:36.278: INFO: Got endpoints: latency-svc-fs526 [6.03762466s]
Aug 26 16:57:37.099: INFO: Created: latency-svc-lngww
Aug 26 16:57:37.568: INFO: Got endpoints: latency-svc-lngww [6.876490722s]
Aug 26 16:57:38.315: INFO: Created: latency-svc-mk6tq
Aug 26 16:57:38.800: INFO: Got endpoints: latency-svc-mk6tq [7.863007081s]
Aug 26 16:57:38.809: INFO: Created: latency-svc-8ldgk
Aug 26 16:57:39.152: INFO: Got endpoints: latency-svc-8ldgk [7.853385652s]
Aug 26 16:57:39.249: INFO: Created: latency-svc-zw2gj
Aug 26 16:57:39.461: INFO: Got endpoints: latency-svc-zw2gj [7.742116409s]
Aug 26 16:57:39.632: INFO: Created: latency-svc-jp6nh
Aug 26 16:57:39.637: INFO: Got endpoints: latency-svc-jp6nh [7.573817071s]
Aug 26 16:57:40.601: INFO: Created: latency-svc-lns85
Aug 26 16:57:40.690: INFO: Got endpoints: latency-svc-lns85 [8.606514494s]
Aug 26 16:57:41.142: INFO: Created: latency-svc-tnwrd
Aug 26 16:57:41.199: INFO: Got endpoints: latency-svc-tnwrd [8.315987782s]
Aug 26 16:57:41.565: INFO: Created: latency-svc-c686d
Aug 26 16:57:41.576: INFO: Got endpoints: latency-svc-c686d [8.629650453s]
Aug 26 16:57:41.643: INFO: Created: latency-svc-plhsc
Aug 26 16:57:42.159: INFO: Got endpoints: latency-svc-plhsc [8.299540149s]
Aug 26 16:57:42.529: INFO: Created: latency-svc-4kgd5
Aug 26 16:57:42.530: INFO: Got endpoints: latency-svc-4kgd5 [8.299115803s]
Aug 26 16:57:43.421: INFO: Created: latency-svc-d579b
Aug 26 16:57:43.458: INFO: Got endpoints: latency-svc-d579b [8.868253019s]
Aug 26 16:57:44.272: INFO: Created: latency-svc-gxzkw
Aug 26 16:57:44.342: INFO: Got endpoints: latency-svc-gxzkw [9.572445482s]
Aug 26 16:57:44.524: INFO: Created: latency-svc-gtf64
Aug 26 16:57:44.739: INFO: Got endpoints: latency-svc-gtf64 [9.875074361s]
Aug 26 16:57:45.053: INFO: Created: latency-svc-6pvdg
Aug 26 16:57:45.071: INFO: Got endpoints: latency-svc-6pvdg [9.875425835s]
Aug 26 16:57:45.120: INFO: Created: latency-svc-nfwvn
Aug 26 16:57:45.242: INFO: Got endpoints: latency-svc-nfwvn [8.963980537s]
Aug 26 16:57:45.243: INFO: Created: latency-svc-dvpfx
Aug 26 16:57:45.296: INFO: Got endpoints: latency-svc-dvpfx [7.728195422s]
Aug 26 16:57:45.426: INFO: Created: latency-svc-gsw2w
Aug 26 16:57:45.619: INFO: Created: latency-svc-8thrv
Aug 26 16:57:45.619: INFO: Got endpoints: latency-svc-gsw2w [6.818764935s]
Aug 26 16:57:45.622: INFO: Got endpoints: latency-svc-8thrv [6.46999951s]
Aug 26 16:57:46.244: INFO: Created: latency-svc-jsj4s
Aug 26 16:57:46.757: INFO: Got endpoints: latency-svc-jsj4s [7.295304727s]
Aug 26 16:57:47.052: INFO: Created: latency-svc-98scx
Aug 26 16:57:47.105: INFO: Got endpoints: latency-svc-98scx [7.468668431s]
Aug 26 16:57:47.401: INFO: Created: latency-svc-gxwfj
Aug 26 16:57:47.812: INFO: Got endpoints: latency-svc-gxwfj [7.12178805s]
Aug 26 16:57:48.242: INFO: Created: latency-svc-k4vdl
Aug 26 16:57:48.447: INFO: Got endpoints: latency-svc-k4vdl [7.247546286s]
Aug 26 16:57:48.447: INFO: Created: latency-svc-67svf
Aug 26 16:57:49.125: INFO: Got endpoints: latency-svc-67svf [7.548437007s]
Aug 26 16:57:49.446: INFO: Created: latency-svc-s9wff
Aug 26 16:57:49.942: INFO: Got endpoints: latency-svc-s9wff [7.782943225s]
Aug 26 16:57:50.458: INFO: Created: latency-svc-fvhpk
Aug 26 16:57:51.069: INFO: Got endpoints: latency-svc-fvhpk [8.539457846s]
Aug 26 16:57:51.853: INFO: Created: latency-svc-frmz5
Aug 26 16:57:53.067: INFO: Created: latency-svc-5g2hn
Aug 26 16:57:53.069: INFO: Got endpoints: latency-svc-frmz5 [9.611262851s]
Aug 26 16:57:53.577: INFO: Got endpoints: latency-svc-5g2hn [9.23536043s]
Aug 26 16:57:53.622: INFO: Created: latency-svc-fwssf
Aug 26 16:57:53.671: INFO: Got endpoints: latency-svc-fwssf [8.932326062s]
Aug 26 16:57:55.347: INFO: Created: latency-svc-4dbkw
Aug 26 16:57:55.426: INFO: Got endpoints: latency-svc-4dbkw [10.354824697s]
Aug 26 16:57:56.112: INFO: Created: latency-svc-7dqn6
Aug 26 16:57:56.470: INFO: Got endpoints: latency-svc-7dqn6 [11.228514546s]
Aug 26 16:57:56.800: INFO: Created: latency-svc-72cqr
Aug 26 16:57:56.888: INFO: Got endpoints: latency-svc-72cqr [11.59187799s]
Aug 26 16:57:57.791: INFO: Created: latency-svc-c9s6l
Aug 26 16:57:58.219: INFO: Got endpoints: latency-svc-c9s6l [12.600280418s]
Aug 26 16:57:58.890: INFO: Created: latency-svc-mvvbm
Aug 26 16:57:59.202: INFO: Got endpoints: latency-svc-mvvbm [13.579548453s]
Aug 26 16:57:59.698: INFO: Created: latency-svc-vmj4b
Aug 26 16:58:00.471: INFO: Got endpoints: latency-svc-vmj4b [13.713813723s]
Aug 26 16:58:01.208: INFO: Created: latency-svc-7wzq8
Aug 26 16:58:01.495: INFO: Created: latency-svc-trsbj
Aug 26 16:58:01.496: INFO: Got endpoints: latency-svc-7wzq8 [14.390561147s]
Aug 26 16:58:01.831: INFO: Got endpoints: latency-svc-trsbj [14.019386071s]
Aug 26 16:58:02.657: INFO: Created: latency-svc-f8hvq
Aug 26 16:58:02.661: INFO: Got endpoints: latency-svc-f8hvq [14.214033806s]
Aug 26 16:58:03.357: INFO: Created: latency-svc-5x2d2
Aug 26 16:58:03.716: INFO: Got endpoints: latency-svc-5x2d2 [14.591157444s]
Aug 26 16:58:04.148: INFO: Created: latency-svc-qstcm
Aug 26 16:58:04.538: INFO: Got endpoints: latency-svc-qstcm [14.595335413s]
Aug 26 16:58:05.483: INFO: Created: latency-svc-fc7kw
Aug 26 16:58:05.488: INFO: Got endpoints: latency-svc-fc7kw [14.418552274s]
Aug 26 16:58:05.973: INFO: Created: latency-svc-fv29n
Aug 26 16:58:06.065: INFO: Got endpoints: latency-svc-fv29n [12.995469973s]
Aug 26 16:58:07.112: INFO: Created: latency-svc-gkns2
Aug 26 16:58:07.120: INFO: Got endpoints: latency-svc-gkns2 [13.542109149s]
Aug 26 16:58:07.572: INFO: Created: latency-svc-dmqfp
Aug 26 16:58:08.093: INFO: Got endpoints: latency-svc-dmqfp [14.421587012s]
Aug 26 16:58:08.183: INFO: Created: latency-svc-v6r99
Aug 26 16:58:08.443: INFO: Got endpoints: latency-svc-v6r99 [13.016699031s]
Aug 26 16:58:09.604: INFO: Created: latency-svc-dxtn7
Aug 26 16:58:09.886: INFO: Got endpoints: latency-svc-dxtn7 [13.415016386s]
Aug 26 16:58:09.929: INFO: Created: latency-svc-qq7gr
Aug 26 16:58:10.106: INFO: Got endpoints: latency-svc-qq7gr [13.218436974s]
Aug 26 16:58:10.198: INFO: Created: latency-svc-s2s9d
Aug 26 16:58:10.375: INFO: Got endpoints: latency-svc-s2s9d [12.155300756s]
Aug 26 16:58:10.438: INFO: Created: latency-svc-mqszk
Aug 26 16:58:10.459: INFO: Got endpoints: latency-svc-mqszk [11.25713554s]
Aug 26 16:58:10.684: INFO: Created: latency-svc-vfssk
Aug 26 16:58:10.831: INFO: Got endpoints: latency-svc-vfssk [10.359865494s]
Aug 26 16:58:10.833: INFO: Created: latency-svc-58xbp
Aug 26 16:58:10.843: INFO: Got endpoints: latency-svc-58xbp [9.34704125s]
Aug 26 16:58:10.902: INFO: Created: latency-svc-pgbcr
Aug 26 16:58:11.092: INFO: Got endpoints: latency-svc-pgbcr [9.260838421s]
Aug 26 16:58:11.340: INFO: Created: latency-svc-x6b7k
Aug 26 16:58:11.345: INFO: Got endpoints: latency-svc-x6b7k [8.683544977s]
Aug 26 16:58:11.934: INFO: Created: latency-svc-5bdqz
Aug 26 16:58:11.994: INFO: Got endpoints: latency-svc-5bdqz [8.278466453s]
Aug 26 16:58:12.176: INFO: Created: latency-svc-l6qh6
Aug 26 16:58:12.181: INFO: Got endpoints: latency-svc-l6qh6 [7.642926549s]
Aug 26 16:58:12.663: INFO: Created: latency-svc-9nhj6
Aug 26 16:58:12.702: INFO: Got endpoints: latency-svc-9nhj6 [7.213687954s]
Aug 26 16:58:12.907: INFO: Created: latency-svc-rvflg
Aug 26 16:58:13.040: INFO: Got endpoints: latency-svc-rvflg [6.974654794s]
Aug 26 16:58:13.086: INFO: Created: latency-svc-rg7ql
Aug 26 16:58:13.261: INFO: Got endpoints: latency-svc-rg7ql [6.140879905s]
Aug 26 16:58:13.447: INFO: Created: latency-svc-zcwkt
Aug 26 16:58:13.475: INFO: Got endpoints: latency-svc-zcwkt [5.382015729s]
Aug 26 16:58:13.519: INFO: Created: latency-svc-nlnfm
Aug 26 16:58:13.535: INFO: Got endpoints: latency-svc-nlnfm [5.092486972s]
Aug 26 16:58:13.619: INFO: Created: latency-svc-4q9w5
Aug 26 16:58:13.705: INFO: Got endpoints: latency-svc-4q9w5 [3.819076838s]
Aug 26 16:58:13.884: INFO: Created: latency-svc-tq597
Aug 26 16:58:13.932: INFO: Got endpoints: latency-svc-tq597 [3.825227053s]
Aug 26 16:58:14.455: INFO: Created: latency-svc-fxz2d
Aug 26 16:58:14.866: INFO: Got endpoints: latency-svc-fxz2d [4.491497787s]
Aug 26 16:58:15.031: INFO: Created: latency-svc-z45m8
Aug 26 16:58:15.376: INFO: Got endpoints: latency-svc-z45m8 [4.916857793s]
Aug 26 16:58:15.547: INFO: Created: latency-svc-jmqq7
Aug 26 16:58:15.825: INFO: Got endpoints: latency-svc-jmqq7 [4.994386157s]
Aug 26 16:58:15.827: INFO: Created: latency-svc-d2wwg
Aug 26 16:58:15.895: INFO: Got endpoints: latency-svc-d2wwg [5.051662531s]
Aug 26 16:58:16.098: INFO: Created: latency-svc-9nmww
Aug 26 16:58:16.128: INFO: Got endpoints: latency-svc-9nmww [5.036051549s]
Aug 26 16:58:16.324: INFO: Created: latency-svc-bjdm7
Aug 26 16:58:16.368: INFO: Got endpoints: latency-svc-bjdm7 [5.023142934s]
Aug 26 16:58:16.577: INFO: Created: latency-svc-s9vpg
Aug 26 16:58:16.758: INFO: Got endpoints: latency-svc-s9vpg [4.763683725s]
Aug 26 16:58:16.759: INFO: Created: latency-svc-vbnfs
Aug 26 16:58:16.833: INFO: Got endpoints: latency-svc-vbnfs [4.652425785s]
Aug 26 16:58:16.977: INFO: Created: latency-svc-cxjtd
Aug 26 16:58:17.033: INFO: Got endpoints: latency-svc-cxjtd [4.33154552s]
Aug 26 16:58:17.702: INFO: Created: latency-svc-kzxwv
Aug 26 16:58:17.817: INFO: Got endpoints: latency-svc-kzxwv [4.777045266s]
Aug 26 16:58:18.065: INFO: Created: latency-svc-dlf9k
Aug 26 16:58:18.070: INFO: Got endpoints: latency-svc-dlf9k [4.809127493s]
Aug 26 16:58:18.418: INFO: Created: latency-svc-fkcsf
Aug 26 16:58:18.909: INFO: Got endpoints: latency-svc-fkcsf [5.434057282s]
Aug 26 16:58:18.970: INFO: Created: latency-svc-j4bx7
Aug 26 16:58:19.111: INFO: Got endpoints: latency-svc-j4bx7 [5.575645391s]
Aug 26 16:58:19.143: INFO: Created: latency-svc-2gbnw
Aug 26 16:58:19.191: INFO: Got endpoints: latency-svc-2gbnw [5.486097668s]
Aug 26 16:58:19.766: INFO: Created: latency-svc-rc8cf
Aug 26 16:58:20.189: INFO: Got endpoints: latency-svc-rc8cf [6.257278114s]
Aug 26 16:58:20.190: INFO: Created: latency-svc-l56db
Aug 26 16:58:20.418: INFO: Got endpoints: latency-svc-l56db [5.55126989s]
Aug 26 16:58:21.190: INFO: Created: latency-svc-rx8cq
Aug 26 16:58:21.195: INFO: Got endpoints: latency-svc-rx8cq [5.819193808s]
Aug 26 16:58:21.476: INFO: Created: latency-svc-l89qj
Aug 26 16:58:21.504: INFO: Got endpoints: latency-svc-l89qj [5.678740447s]
Aug 26 16:58:21.547: INFO: Created: latency-svc-t25b9
Aug 26 16:58:21.551: INFO: Got endpoints: latency-svc-t25b9 [5.656063078s]
Aug 26 16:58:21.687: INFO: Created: latency-svc-qgcl7
Aug 26 16:58:21.997: INFO: Got endpoints: latency-svc-qgcl7 [5.868405113s]
Aug 26 16:58:22.513: INFO: Created: latency-svc-2rtvt
Aug 26 16:58:22.705: INFO: Got endpoints: latency-svc-2rtvt [6.337262089s]
Aug 26 16:58:22.718: INFO: Created: latency-svc-zx8jx
Aug 26 16:58:22.729: INFO: Got endpoints: latency-svc-zx8jx [5.971021469s]
Aug 26 16:58:24.014: INFO: Created: latency-svc-dzfzp
Aug 26 16:58:24.065: INFO: Got endpoints: latency-svc-dzfzp [7.231515267s]
Aug 26 16:58:24.338: INFO: Created: latency-svc-nhbl7
Aug 26 16:58:24.368: INFO: Created: latency-svc-st2d7
Aug 26 16:58:24.369: INFO: Got endpoints: latency-svc-nhbl7 [7.335397849s]
Aug 26 16:58:24.538: INFO: Got endpoints: latency-svc-st2d7 [6.720972866s]
Aug 26 16:58:24.661: INFO: Created: latency-svc-x7w9b
Aug 26 16:58:24.687: INFO: Got endpoints: latency-svc-x7w9b [6.617395287s]
Aug 26 16:58:24.710: INFO: Created: latency-svc-v7pq7
Aug 26 16:58:24.718: INFO: Got endpoints: latency-svc-v7pq7 [5.809291677s]
Aug 26 16:58:24.753: INFO: Created: latency-svc-lmwrn
Aug 26 16:58:24.761: INFO: Got endpoints: latency-svc-lmwrn [5.649866183s]
Aug 26 16:58:24.848: INFO: Created: latency-svc-rxftr
Aug 26 16:58:24.873: INFO: Got endpoints: latency-svc-rxftr [5.682197155s]
Aug 26 16:58:24.909: INFO: Created: latency-svc-vr6pn
Aug 26 16:58:24.924: INFO: Got endpoints: latency-svc-vr6pn [4.735371347s]
Aug 26 16:58:25.063: INFO: Created: latency-svc-pgvwl
Aug 26 16:58:25.073: INFO: Got endpoints: latency-svc-pgvwl [4.655094016s]
Aug 26 16:58:25.131: INFO: Created: latency-svc-nrjlk
Aug 26 16:58:25.238: INFO: Got endpoints: latency-svc-nrjlk [4.042710194s]
Aug 26 16:58:25.257: INFO: Created: latency-svc-jclqn
Aug 26 16:58:25.272: INFO: Got endpoints: latency-svc-jclqn [3.767763756s]
Aug 26 16:58:25.603: INFO: Created: latency-svc-gn8t2
Aug 26 16:58:25.607: INFO: Got endpoints: latency-svc-gn8t2 [4.056347106s]
Aug 26 16:58:25.765: INFO: Created: latency-svc-8np8g
Aug 26 16:58:25.863: INFO: Got endpoints: latency-svc-8np8g [3.866108154s]
Aug 26 16:58:26.032: INFO: Created: latency-svc-47kw5
Aug 26 16:58:26.069: INFO: Got endpoints: latency-svc-47kw5 [3.363943233s]
Aug 26 16:58:26.329: INFO: Created: latency-svc-6mwtz
Aug 26 16:58:26.421: INFO: Got endpoints: latency-svc-6mwtz [3.692010975s]
Aug 26 16:58:26.554: INFO: Created: latency-svc-6mcc5
Aug 26 16:58:26.569: INFO: Got endpoints: latency-svc-6mcc5 [2.503816042s]
Aug 26 16:58:26.613: INFO: Created: latency-svc-ftzrr
Aug 26 16:58:26.650: INFO: Got endpoints: latency-svc-ftzrr [2.281242395s]
Aug 26 16:58:26.717: INFO: Created: latency-svc-s6qjx
Aug 26 16:58:26.740: INFO: Created: latency-svc-sq2gf
Aug 26 16:58:26.741: INFO: Got endpoints: latency-svc-s6qjx [2.202969101s]
Aug 26 16:58:26.794: INFO: Got endpoints: latency-svc-sq2gf [2.10703725s]
Aug 26 16:58:26.867: INFO: Created: latency-svc-hshss
Aug 26 16:58:26.896: INFO: Got endpoints: latency-svc-hshss [2.177661912s]
Aug 26 16:58:26.933: INFO: Created: latency-svc-hlrmr
Aug 26 16:58:27.016: INFO: Got endpoints: latency-svc-hlrmr [2.255134789s]
Aug 26 16:58:27.053: INFO: Created: latency-svc-gwkkx
Aug 26 16:58:27.069: INFO: Got endpoints: latency-svc-gwkkx [2.19595105s]
Aug 26 16:58:27.100: INFO: Created: latency-svc-k7wdr
Aug 26 16:58:27.111: INFO: Got endpoints: latency-svc-k7wdr [2.186480355s]
Aug 26 16:58:27.158: INFO: Created: latency-svc-lrlkx
Aug 26 16:58:27.179: INFO: Got endpoints: latency-svc-lrlkx [2.105608537s]
Aug 26 16:58:27.209: INFO: Created: latency-svc-9d9bn
Aug 26 16:58:27.225: INFO: Got endpoints: latency-svc-9d9bn [1.987631736s]
Aug 26 16:58:27.244: INFO: Created: latency-svc-4hk74
Aug 26 16:58:27.298: INFO: Got endpoints: latency-svc-4hk74 [2.026180392s]
Aug 26 16:58:27.299: INFO: Created: latency-svc-xpbzp
Aug 26 16:58:27.316: INFO: Got endpoints: latency-svc-xpbzp [1.708606624s]
Aug 26 16:58:27.347: INFO: Created: latency-svc-ch8tz
Aug 26 16:58:27.366: INFO: Got endpoints: latency-svc-ch8tz [1.502553493s]
Aug 26 16:58:27.463: INFO: Created: latency-svc-vrm8m
Aug 26 16:58:27.515: INFO: Created: latency-svc-vtzkv
Aug 26 16:58:27.515: INFO: Got endpoints: latency-svc-vrm8m [1.445654444s]
Aug 26 16:58:27.526: INFO: Got endpoints: latency-svc-vtzkv [1.104969153s]
Aug 26 16:58:27.550: INFO: Created: latency-svc-cpzt2
Aug 26 16:58:27.563: INFO: Got endpoints: latency-svc-cpzt2 [994.431752ms]
Aug 26 16:58:27.627: INFO: Created: latency-svc-8jz2f
Aug 26 16:58:27.631: INFO: Got endpoints: latency-svc-8jz2f [981.262595ms]
Aug 26 16:58:27.670: INFO: Created: latency-svc-zblvt
Aug 26 16:58:27.690: INFO: Got endpoints: latency-svc-zblvt [948.774144ms]
Aug 26 16:58:27.725: INFO: Created: latency-svc-44ctl
Aug 26 16:58:27.787: INFO: Got endpoints: latency-svc-44ctl [992.918855ms]
Aug 26 16:58:27.790: INFO: Created: latency-svc-p985s
Aug 26 16:58:27.798: INFO: Got endpoints: latency-svc-p985s [901.413148ms]
Aug 26 16:58:27.874: INFO: Created: latency-svc-vqmx8
Aug 26 16:58:28.058: INFO: Got endpoints: latency-svc-vqmx8 [1.042334863s]
Aug 26 16:58:28.063: INFO: Created: latency-svc-2g5kq
Aug 26 16:58:28.080: INFO: Got endpoints: latency-svc-2g5kq [1.010836928s]
Aug 26 16:58:28.120: INFO: Created: latency-svc-n9cd2
Aug 26 16:58:28.146: INFO: Got endpoints: latency-svc-n9cd2 [1.034960774s]
Aug 26 16:58:28.146: INFO: Latencies: [341.158436ms 901.413148ms 948.774144ms 981.262595ms 992.918855ms 994.431752ms 1.010836928s 1.034960774s 1.042334863s 1.104969153s 1.445654444s 1.502553493s 1.708606624s 1.834706351s 1.987631736s 2.026180392s 2.105608537s 2.10703725s 2.177661912s 2.186480355s 2.19595105s 2.202969101s 2.255134789s 2.281242395s 2.503816042s 2.806068757s 3.032765505s 3.285632036s 3.363943233s 3.679227014s 3.692010975s 3.767763756s 3.819076838s 3.825227053s 3.866108154s 4.042710194s 4.056347106s 4.063673606s 4.33154552s 4.384869859s 4.491497787s 4.559304157s 4.577112518s 4.652425785s 4.653723404s 4.655094016s 4.735371347s 4.763683725s 4.777045266s 4.809127493s 4.863543001s 4.916857793s 4.928817637s 4.938817989s 4.994386157s 5.023142934s 5.032207357s 5.036051549s 5.041196889s 5.051662531s 5.068425717s 5.092486972s 5.135376727s 5.138327482s 5.146907256s 5.147259232s 5.160706831s 5.161340772s 5.163891444s 5.164547748s 5.229932034s 5.247695282s 5.306593554s 5.320910452s 5.382015729s 5.392173966s 5.429934695s 5.432392732s 5.434057282s 5.483536423s 5.486097668s 5.55126989s 5.55986123s 5.575645391s 5.649866183s 5.656063078s 5.678740447s 5.682197155s 5.684285609s 5.700783572s 5.809291677s 5.819193808s 5.820274296s 5.821425797s 5.82396229s 5.841130463s 5.868405113s 5.906127055s 5.971021469s 6.03762466s 6.064895969s 6.140879905s 6.173862339s 6.257278114s 6.264183668s 6.285880583s 6.303198746s 6.337262089s 6.461805994s 6.46999951s 6.535073217s 6.572746859s 6.582137389s 6.591375655s 6.617395287s 6.628884536s 6.718923502s 6.720972866s 6.818764935s 6.869628498s 6.876490722s 6.92399129s 6.974654794s 6.979385531s 6.998277205s 7.078127356s 7.086087034s 7.12178805s 7.213687954s 7.231515267s 7.247546286s 7.295304727s 7.335397849s 7.365190884s 7.468668431s 7.548437007s 7.573817071s 7.642926549s 7.728195422s 7.742116409s 7.782943225s 7.853385652s 7.863007081s 8.278466453s 8.299115803s 8.299540149s 8.315987782s 8.539457846s 8.606514494s 8.629650453s 8.683544977s 8.855732991s 8.868253019s 8.932326062s 8.963980537s 8.980190389s 9.23536043s 9.260838421s 9.281723311s 9.34704125s 9.572445482s 9.611262851s 9.796860935s 9.875074361s 9.875425835s 9.877745961s 10.11513758s 10.354824697s 10.359865494s 10.770724477s 10.95826101s 11.013657452s 11.228514546s 11.25713554s 11.474202241s 11.584171089s 11.59187799s 11.789658142s 12.065056293s 12.155300756s 12.331146794s 12.36837177s 12.509406689s 12.600280418s 12.995469973s 13.016699031s 13.173742666s 13.191670495s 13.218436974s 13.415016386s 13.542109149s 13.579548453s 13.713813723s 14.019386071s 14.214033806s 14.390561147s 14.418552274s 14.421587012s 14.591157444s 14.595335413s]
Aug 26 16:58:28.146: INFO: 50 %ile: 6.064895969s
Aug 26 16:58:28.146: INFO: 90 %ile: 12.331146794s
Aug 26 16:58:28.146: INFO: 99 %ile: 14.591157444s
Aug 26 16:58:28.146: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:58:28.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1835" for this suite.

• [SLOW TEST:98.573 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":91,"skipped":1517,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:58:28.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4777
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4777
STEP: creating replication controller externalsvc in namespace services-4777
I0826 16:58:28.686078       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4777, replica count: 2
I0826 16:58:31.736501       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:58:34.736704       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 16:58:37.736978       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Aug 26 16:58:37.927: INFO: Creating new exec pod
Aug 26 16:58:46.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-4777 execpodk4684 -- /bin/sh -x -c nslookup nodeport-service'
Aug 26 16:58:46.854: INFO: stderr: "I0826 16:58:46.757431    2649 log.go:172] (0xc0009ebe40) (0xc0009968c0) Create stream\nI0826 16:58:46.757502    2649 log.go:172] (0xc0009ebe40) (0xc0009968c0) Stream added, broadcasting: 1\nI0826 16:58:46.763554    2649 log.go:172] (0xc0009ebe40) Reply frame received for 1\nI0826 16:58:46.763586    2649 log.go:172] (0xc0009ebe40) (0xc0006bf5e0) Create stream\nI0826 16:58:46.763595    2649 log.go:172] (0xc0009ebe40) (0xc0006bf5e0) Stream added, broadcasting: 3\nI0826 16:58:46.764430    2649 log.go:172] (0xc0009ebe40) Reply frame received for 3\nI0826 16:58:46.764475    2649 log.go:172] (0xc0009ebe40) (0xc0004fea00) Create stream\nI0826 16:58:46.764490    2649 log.go:172] (0xc0009ebe40) (0xc0004fea00) Stream added, broadcasting: 5\nI0826 16:58:46.765378    2649 log.go:172] (0xc0009ebe40) Reply frame received for 5\nI0826 16:58:46.836928    2649 log.go:172] (0xc0009ebe40) Data frame received for 5\nI0826 16:58:46.836955    2649 log.go:172] (0xc0004fea00) (5) Data frame handling\nI0826 16:58:46.836967    2649 log.go:172] (0xc0004fea00) (5) Data frame sent\n+ nslookup nodeport-service\nI0826 16:58:46.844245    2649 log.go:172] (0xc0009ebe40) Data frame received for 3\nI0826 16:58:46.844269    2649 log.go:172] (0xc0006bf5e0) (3) Data frame handling\nI0826 16:58:46.844293    2649 log.go:172] (0xc0006bf5e0) (3) Data frame sent\nI0826 16:58:46.845131    2649 log.go:172] (0xc0009ebe40) Data frame received for 3\nI0826 16:58:46.845151    2649 log.go:172] (0xc0006bf5e0) (3) Data frame handling\nI0826 16:58:46.845165    2649 log.go:172] (0xc0006bf5e0) (3) Data frame sent\nI0826 16:58:46.845582    2649 log.go:172] (0xc0009ebe40) Data frame received for 5\nI0826 16:58:46.845603    2649 log.go:172] (0xc0004fea00) (5) Data frame handling\nI0826 16:58:46.845640    2649 log.go:172] (0xc0009ebe40) Data frame received for 3\nI0826 16:58:46.845675    2649 log.go:172] (0xc0006bf5e0) (3) Data frame handling\nI0826 16:58:46.847323    2649 log.go:172] (0xc0009ebe40) Data frame received for 1\nI0826 16:58:46.847347    2649 log.go:172] (0xc0009968c0) (1) Data frame handling\nI0826 16:58:46.847365    2649 log.go:172] (0xc0009968c0) (1) Data frame sent\nI0826 16:58:46.847390    2649 log.go:172] (0xc0009ebe40) (0xc0009968c0) Stream removed, broadcasting: 1\nI0826 16:58:46.847424    2649 log.go:172] (0xc0009ebe40) Go away received\nI0826 16:58:46.847805    2649 log.go:172] (0xc0009ebe40) (0xc0009968c0) Stream removed, broadcasting: 1\nI0826 16:58:46.847825    2649 log.go:172] (0xc0009ebe40) (0xc0006bf5e0) Stream removed, broadcasting: 3\nI0826 16:58:46.847834    2649 log.go:172] (0xc0009ebe40) (0xc0004fea00) Stream removed, broadcasting: 5\n"
Aug 26 16:58:46.854: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4777.svc.cluster.local\tcanonical name = externalsvc.services-4777.svc.cluster.local.\nName:\texternalsvc.services-4777.svc.cluster.local\nAddress: 10.96.46.67\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4777, will wait for the garbage collector to delete the pods
Aug 26 16:58:47.031: INFO: Deleting ReplicationController externalsvc took: 47.712334ms
Aug 26 16:58:47.231: INFO: Terminating ReplicationController externalsvc pods took: 200.245518ms
Aug 26 16:59:03.625: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:59:05.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4777" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:38.162 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":92,"skipped":1521,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:59:06.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 16:59:07.679: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:59:09.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8477" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":93,"skipped":1532,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:59:09.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-4jxc
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 16:59:10.180: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4jxc" in namespace "subpath-2423" to be "Succeeded or Failed"
Aug 26 16:59:10.601: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Pending", Reason="", readiness=false. Elapsed: 421.629055ms
Aug 26 16:59:12.805: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625899323s
Aug 26 16:59:14.922: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.742491561s
Aug 26 16:59:17.393: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 7.213730795s
Aug 26 16:59:19.823: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 9.643859558s
Aug 26 16:59:21.949: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 11.769117086s
Aug 26 16:59:24.081: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 13.901513523s
Aug 26 16:59:26.123: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 15.942953947s
Aug 26 16:59:28.136: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 17.956080805s
Aug 26 16:59:30.170: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 19.990572947s
Aug 26 16:59:32.326: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 22.146586167s
Aug 26 16:59:35.146: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 24.966791627s
Aug 26 16:59:37.244: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Running", Reason="", readiness=true. Elapsed: 27.064802443s
Aug 26 16:59:39.334: INFO: Pod "pod-subpath-test-configmap-4jxc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.15453229s
STEP: Saw pod success
Aug 26 16:59:39.334: INFO: Pod "pod-subpath-test-configmap-4jxc" satisfied condition "Succeeded or Failed"
Aug 26 16:59:39.410: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-4jxc container test-container-subpath-configmap-4jxc: 
STEP: delete the pod
Aug 26 16:59:41.453: INFO: Waiting for pod pod-subpath-test-configmap-4jxc to disappear
Aug 26 16:59:41.524: INFO: Pod pod-subpath-test-configmap-4jxc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4jxc
Aug 26 16:59:41.524: INFO: Deleting pod "pod-subpath-test-configmap-4jxc" in namespace "subpath-2423"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 16:59:42.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2423" for this suite.

• [SLOW TEST:34.448 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":94,"skipped":1538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 16:59:44.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:00:32.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1806" for this suite.

• [SLOW TEST:48.583 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":95,"skipped":1594,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:00:32.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 17:00:35.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:35.562: INFO: Number of nodes with available pods: 0
Aug 26 17:00:35.563: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:36.798: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:37.555: INFO: Number of nodes with available pods: 0
Aug 26 17:00:37.555: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:37.977: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:38.562: INFO: Number of nodes with available pods: 0
Aug 26 17:00:38.562: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:39.245: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:39.459: INFO: Number of nodes with available pods: 0
Aug 26 17:00:39.459: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:40.249: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:40.253: INFO: Number of nodes with available pods: 0
Aug 26 17:00:40.253: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:40.988: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:41.499: INFO: Number of nodes with available pods: 0
Aug 26 17:00:41.499: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:41.767: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:42.920: INFO: Number of nodes with available pods: 0
Aug 26 17:00:42.920: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:44.241: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:44.927: INFO: Number of nodes with available pods: 0
Aug 26 17:00:44.927: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:45.658: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:46.077: INFO: Number of nodes with available pods: 0
Aug 26 17:00:46.077: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:46.978: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:47.214: INFO: Number of nodes with available pods: 0
Aug 26 17:00:47.214: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:47.757: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:47.887: INFO: Number of nodes with available pods: 0
Aug 26 17:00:47.887: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:48.628: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:48.915: INFO: Number of nodes with available pods: 0
Aug 26 17:00:48.915: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:00:49.809: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:50.096: INFO: Number of nodes with available pods: 1
Aug 26 17:00:50.096: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:00:51.204: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:51.417: INFO: Number of nodes with available pods: 1
Aug 26 17:00:51.417: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:00:51.662: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:51.701: INFO: Number of nodes with available pods: 1
Aug 26 17:00:51.701: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:00:52.654: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:53.066: INFO: Number of nodes with available pods: 2
Aug 26 17:00:53.066: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 26 17:00:53.283: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:00:53.286: INFO: Number of nodes with available pods: 2
Aug 26 17:00:53.286: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-493, will wait for the garbage collector to delete the pods
Aug 26 17:00:54.215: INFO: Deleting DaemonSet.extensions daemon-set took: 186.337239ms
Aug 26 17:00:54.915: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.238538ms
Aug 26 17:00:59.744: INFO: Number of nodes with available pods: 0
Aug 26 17:00:59.744: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 17:00:59.747: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-493/daemonsets","resourceVersion":"1103401"},"items":null}

Aug 26 17:00:59.829: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-493/pods","resourceVersion":"1103403"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:00:59.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-493" for this suite.

• [SLOW TEST:27.385 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":96,"skipped":1609,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:01:00.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:01:12.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-917" for this suite.

• [SLOW TEST:13.093 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":97,"skipped":1611,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:01:13.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-81a24df1-3d70-4917-b31c-3282dd4ad509
STEP: Creating a pod to test consume secrets
Aug 26 17:01:13.565: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862" in namespace "projected-7251" to be "Succeeded or Failed"
Aug 26 17:01:13.608: INFO: Pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862": Phase="Pending", Reason="", readiness=false. Elapsed: 42.98709ms
Aug 26 17:01:16.149: INFO: Pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.583937756s
Aug 26 17:01:18.591: INFO: Pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862": Phase="Pending", Reason="", readiness=false. Elapsed: 5.025590949s
Aug 26 17:01:20.908: INFO: Pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.343185053s
STEP: Saw pod success
Aug 26 17:01:20.908: INFO: Pod "pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862" satisfied condition "Succeeded or Failed"
Aug 26 17:01:21.237: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 17:01:21.505: INFO: Waiting for pod pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862 to disappear
Aug 26 17:01:21.553: INFO: Pod pod-projected-secrets-26125eb5-0e6d-431d-87b4-37c774596862 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:01:21.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7251" for this suite.

• [SLOW TEST:8.483 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:01:21.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:01:22.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e" in namespace "downward-api-3981" to be "Succeeded or Failed"
Aug 26 17:01:22.778: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Pending", Reason="", readiness=false. Elapsed: 171.047718ms
Aug 26 17:01:24.817: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20997004s
Aug 26 17:01:28.591: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.983734513s
Aug 26 17:01:31.409: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801423249s
Aug 26 17:01:33.529: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.921366123s
Aug 26 17:01:36.141: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.533215563s
STEP: Saw pod success
Aug 26 17:01:36.141: INFO: Pod "downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e" satisfied condition "Succeeded or Failed"
Aug 26 17:01:37.241: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e container client-container: 
STEP: delete the pod
Aug 26 17:01:40.174: INFO: Waiting for pod downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e to disappear
Aug 26 17:01:40.405: INFO: Pod downwardapi-volume-e0939221-0f9d-41ad-bf82-8cc7a2d9a91e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:01:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3981" for this suite.

• [SLOW TEST:19.088 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:01:40.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 26 17:01:45.585: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4100 /api/v1/namespaces/watch-4100/configmaps/e2e-watch-test-watch-closed 129528a6-41c9-4f38-aee9-cb99e1ae0243 1103953 0 2020-08-26 17:01:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-26 17:01:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:01:45.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4100 /api/v1/namespaces/watch-4100/configmaps/e2e-watch-test-watch-closed 129528a6-41c9-4f38-aee9-cb99e1ae0243 1103958 0 2020-08-26 17:01:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-26 17:01:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 26 17:01:46.653: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4100 /api/v1/namespaces/watch-4100/configmaps/e2e-watch-test-watch-closed 129528a6-41c9-4f38-aee9-cb99e1ae0243 1103961 0 2020-08-26 17:01:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-26 17:01:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:01:46.653: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4100 /api/v1/namespaces/watch-4100/configmaps/e2e-watch-test-watch-closed 129528a6-41c9-4f38-aee9-cb99e1ae0243 1103962 0 2020-08-26 17:01:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-08-26 17:01:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:01:46.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4100" for this suite.

• [SLOW TEST:5.960 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":100,"skipped":1689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:01:46.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 26 17:01:50.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4195'
Aug 26 17:01:52.181: INFO: stderr: ""
Aug 26 17:01:52.181: INFO: stdout: "pod/pause created\n"
Aug 26 17:01:52.181: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 26 17:01:52.182: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4195" to be "running and ready"
Aug 26 17:01:52.527: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 345.293943ms
Aug 26 17:01:54.629: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447553972s
Aug 26 17:01:56.824: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64244886s
Aug 26 17:01:59.130: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.947867084s
Aug 26 17:02:01.133: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.951293056s
Aug 26 17:02:03.349: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.166950421s
Aug 26 17:02:05.573: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 13.391806153s
Aug 26 17:02:05.574: INFO: Pod "pause" satisfied condition "running and ready"
Aug 26 17:02:05.574: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 26 17:02:05.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4195'
Aug 26 17:02:05.856: INFO: stderr: ""
Aug 26 17:02:05.856: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 26 17:02:05.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4195'
Aug 26 17:02:05.954: INFO: stderr: ""
Aug 26 17:02:05.954: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          13s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 26 17:02:05.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4195'
Aug 26 17:02:06.141: INFO: stderr: ""
Aug 26 17:02:06.141: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 26 17:02:06.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4195'
Aug 26 17:02:06.433: INFO: stderr: ""
Aug 26 17:02:06.433: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          14s   \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 26 17:02:06.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4195'
Aug 26 17:02:08.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 17:02:08.569: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 26 17:02:08.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4195'
Aug 26 17:02:09.948: INFO: stderr: "No resources found in kubectl-4195 namespace.\n"
Aug 26 17:02:09.948: INFO: stdout: ""
Aug 26 17:02:09.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4195 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 17:02:10.192: INFO: stderr: ""
Aug 26 17:02:10.192: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:02:10.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4195" for this suite.

• [SLOW TEST:23.466 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":101,"skipped":1717,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:02:10.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:02:11.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config version'
Aug 26 17:02:12.559: INFO: stderr: ""
Aug 26 17:02:12.559: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T16:12:48Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.8\", GitCommit:\"9f2892aab98fe339f3bd70e3c470144299398ace\", GitTreeState:\"clean\", BuildDate:\"2020-08-14T21:13:38Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:02:12.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6341" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":102,"skipped":1727,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:02:13.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:02:14.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67" in namespace "downward-api-3352" to be "Succeeded or Failed"
Aug 26 17:02:15.088: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 582.352643ms
Aug 26 17:02:17.348: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842079306s
Aug 26 17:02:19.606: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 5.100004889s
Aug 26 17:02:22.510: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.004332358s
Aug 26 17:02:24.839: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333603302s
Aug 26 17:02:27.416: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Pending", Reason="", readiness=false. Elapsed: 12.910386506s
Aug 26 17:02:29.420: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Running", Reason="", readiness=true. Elapsed: 14.914228604s
Aug 26 17:02:31.686: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.180336421s
STEP: Saw pod success
Aug 26 17:02:31.686: INFO: Pod "downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67" satisfied condition "Succeeded or Failed"
Aug 26 17:02:31.688: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67 container client-container: 
STEP: delete the pod
Aug 26 17:02:32.219: INFO: Waiting for pod downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67 to disappear
Aug 26 17:02:32.387: INFO: Pod downwardapi-volume-522d61ad-1539-43ef-a130-a89f182bbd67 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:02:32.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3352" for this suite.

• [SLOW TEST:20.575 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1733,"failed":0}
SSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:02:33.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:02:34.631: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6" in namespace "security-context-test-4845" to be "Succeeded or Failed"
Aug 26 17:02:35.049: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 418.121686ms
Aug 26 17:02:37.142: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.51173121s
Aug 26 17:02:39.523: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.891918212s
Aug 26 17:02:41.773: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.142485906s
Aug 26 17:02:44.220: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.589181991s
Aug 26 17:02:46.492: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Running", Reason="", readiness=true. Elapsed: 11.861658246s
Aug 26 17:02:48.932: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.301361714s
Aug 26 17:02:48.932: INFO: Pod "busybox-readonly-false-f75b1603-94b0-457d-9b5e-65d3a57bf6f6" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:02:48.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4845" for this suite.

• [SLOW TEST:15.599 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:02:49.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:02:51.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4216" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":105,"skipped":1779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:02:51.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-dabd7fc7-98e4-41cf-bf2e-371b18d761fa in namespace container-probe-1060
Aug 26 17:02:57.939: INFO: Started pod liveness-dabd7fc7-98e4-41cf-bf2e-371b18d761fa in namespace container-probe-1060
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 17:02:57.942: INFO: Initial restart count of pod liveness-dabd7fc7-98e4-41cf-bf2e-371b18d761fa is 0
Aug 26 17:03:18.259: INFO: Restart count of pod container-probe-1060/liveness-dabd7fc7-98e4-41cf-bf2e-371b18d761fa is now 1 (20.317759139s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:03:18.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1060" for this suite.

• [SLOW TEST:27.721 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1801,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:03:19.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 17:03:21.366: INFO: Waiting up to 5m0s for pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a" in namespace "emptydir-2853" to be "Succeeded or Failed"
Aug 26 17:03:21.998: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a": Phase="Pending", Reason="", readiness=false. Elapsed: 631.304232ms
Aug 26 17:03:24.001: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.634948115s
Aug 26 17:03:26.591: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.224869209s
Aug 26 17:03:28.595: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a": Phase="Running", Reason="", readiness=true. Elapsed: 7.228306864s
Aug 26 17:03:30.598: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.231920025s
STEP: Saw pod success
Aug 26 17:03:30.598: INFO: Pod "pod-7305f09b-062a-425e-b7c7-bd2db712025a" satisfied condition "Succeeded or Failed"
Aug 26 17:03:30.601: INFO: Trying to get logs from node kali-worker pod pod-7305f09b-062a-425e-b7c7-bd2db712025a container test-container: 
STEP: delete the pod
Aug 26 17:03:31.251: INFO: Waiting for pod pod-7305f09b-062a-425e-b7c7-bd2db712025a to disappear
Aug 26 17:03:31.554: INFO: Pod pod-7305f09b-062a-425e-b7c7-bd2db712025a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:03:31.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2853" for this suite.

• [SLOW TEST:12.421 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1802,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:03:31.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 26 17:03:33.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5435'
Aug 26 17:03:34.988: INFO: stderr: ""
Aug 26 17:03:34.988: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 26 17:03:34.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5435'
Aug 26 17:03:36.097: INFO: stderr: ""
Aug 26 17:03:36.097: INFO: stdout: "update-demo-nautilus-959w5 "
STEP: Replicas for name=update-demo: expected=2 actual=1
Aug 26 17:03:41.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5435'
Aug 26 17:03:41.257: INFO: stderr: ""
Aug 26 17:03:41.257: INFO: stdout: "update-demo-nautilus-959w5 update-demo-nautilus-dgdmd "
Aug 26 17:03:41.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-959w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:42.236: INFO: stderr: ""
Aug 26 17:03:42.236: INFO: stdout: ""
Aug 26 17:03:42.236: INFO: update-demo-nautilus-959w5 is created but not running
Aug 26 17:03:47.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5435'
Aug 26 17:03:47.646: INFO: stderr: ""
Aug 26 17:03:47.646: INFO: stdout: "update-demo-nautilus-959w5 update-demo-nautilus-dgdmd "
Aug 26 17:03:47.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-959w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:48.031: INFO: stderr: ""
Aug 26 17:03:48.031: INFO: stdout: ""
Aug 26 17:03:48.031: INFO: update-demo-nautilus-959w5 is created but not running
Aug 26 17:03:53.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5435'
Aug 26 17:03:53.133: INFO: stderr: ""
Aug 26 17:03:53.133: INFO: stdout: "update-demo-nautilus-959w5 update-demo-nautilus-dgdmd "
Aug 26 17:03:53.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-959w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:53.238: INFO: stderr: ""
Aug 26 17:03:53.238: INFO: stdout: "true"
Aug 26 17:03:53.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-959w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:53.336: INFO: stderr: ""
Aug 26 17:03:53.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 17:03:53.336: INFO: validating pod update-demo-nautilus-959w5
Aug 26 17:03:53.340: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 17:03:53.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 17:03:53.340: INFO: update-demo-nautilus-959w5 is verified up and running
Aug 26 17:03:53.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dgdmd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:53.432: INFO: stderr: ""
Aug 26 17:03:53.432: INFO: stdout: "true"
Aug 26 17:03:53.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dgdmd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5435'
Aug 26 17:03:53.535: INFO: stderr: ""
Aug 26 17:03:53.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 26 17:03:53.535: INFO: validating pod update-demo-nautilus-dgdmd
Aug 26 17:03:53.550: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 26 17:03:53.550: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 26 17:03:53.550: INFO: update-demo-nautilus-dgdmd is verified up and running
STEP: using delete to clean up resources
Aug 26 17:03:53.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5435'
Aug 26 17:03:53.651: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 17:03:53.651: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 26 17:03:53.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5435'
Aug 26 17:03:53.749: INFO: stderr: "No resources found in kubectl-5435 namespace.\n"
Aug 26 17:03:53.749: INFO: stdout: ""
Aug 26 17:03:53.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5435 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 17:03:53.849: INFO: stderr: ""
Aug 26 17:03:53.849: INFO: stdout: "update-demo-nautilus-959w5\nupdate-demo-nautilus-dgdmd\n"
Aug 26 17:03:54.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5435'
Aug 26 17:03:54.458: INFO: stderr: "No resources found in kubectl-5435 namespace.\n"
Aug 26 17:03:54.458: INFO: stdout: ""
Aug 26 17:03:54.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5435 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 26 17:03:54.566: INFO: stderr: ""
Aug 26 17:03:54.566: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:03:54.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5435" for this suite.

• [SLOW TEST:22.948 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":108,"skipped":1804,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:03:54.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:03:55.296: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:03:56.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8511" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":109,"skipped":1815,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:03:56.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:03:56.750: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:03:58.754: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:04:00.807: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:04:03.116: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:05.254: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:06.754: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:09.592: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:11.203: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:12.768: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:14.754: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:16.755: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:18.754: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:20.966: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:22.776: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = false)
Aug 26 17:04:25.198: INFO: The status of Pod test-webserver-da310d68-79cb-4f47-8763-448dec5d8d9b is Running (Ready = true)
Aug 26 17:04:25.201: INFO: Container started at 2020-08-26 17:04:00 +0000 UTC, pod became ready at 2020-08-26 17:04:24 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:04:25.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4119" for this suite.

• [SLOW TEST:28.629 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1890,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:04:25.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 26 17:04:26.086: INFO: Waiting up to 5m0s for pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f" in namespace "downward-api-643" to be "Succeeded or Failed"
Aug 26 17:04:26.761: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Pending", Reason="", readiness=false. Elapsed: 674.812178ms
Aug 26 17:04:29.218: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131517956s
Aug 26 17:04:31.441: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.35535819s
Aug 26 17:04:33.446: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.359553472s
Aug 26 17:04:35.519: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.432960483s
Aug 26 17:04:37.729: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Running", Reason="", readiness=true. Elapsed: 11.643001623s
Aug 26 17:04:39.733: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.647308769s
STEP: Saw pod success
Aug 26 17:04:39.733: INFO: Pod "downward-api-00f8d708-4069-4079-bee4-421e3596424f" satisfied condition "Succeeded or Failed"
Aug 26 17:04:39.737: INFO: Trying to get logs from node kali-worker pod downward-api-00f8d708-4069-4079-bee4-421e3596424f container dapi-container: 
STEP: delete the pod
Aug 26 17:04:40.638: INFO: Waiting for pod downward-api-00f8d708-4069-4079-bee4-421e3596424f to disappear
Aug 26 17:04:41.376: INFO: Pod downward-api-00f8d708-4069-4079-bee4-421e3596424f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:04:41.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-643" for this suite.

• [SLOW TEST:16.887 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1900,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:04:42.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:04:50.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7408" for this suite.

• [SLOW TEST:8.595 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1914,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:04:50.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:05:05.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2179" for this suite.

• [SLOW TEST:14.886 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":113,"skipped":1915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:05:05.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:05:07.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 17:05:10.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1633 create -f -'
Aug 26 17:05:25.475: INFO: stderr: ""
Aug 26 17:05:25.475: INFO: stdout: "e2e-test-crd-publish-openapi-6315-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 17:05:25.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1633 delete e2e-test-crd-publish-openapi-6315-crds test-cr'
Aug 26 17:05:25.648: INFO: stderr: ""
Aug 26 17:05:25.648: INFO: stdout: "e2e-test-crd-publish-openapi-6315-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 26 17:05:25.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1633 apply -f -'
Aug 26 17:05:27.005: INFO: stderr: ""
Aug 26 17:05:27.005: INFO: stdout: "e2e-test-crd-publish-openapi-6315-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 26 17:05:27.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1633 delete e2e-test-crd-publish-openapi-6315-crds test-cr'
Aug 26 17:05:27.266: INFO: stderr: ""
Aug 26 17:05:27.266: INFO: stdout: "e2e-test-crd-publish-openapi-6315-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 17:05:27.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6315-crds'
Aug 26 17:05:27.898: INFO: stderr: ""
Aug 26 17:05:27.898: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6315-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:05:33.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1633" for this suite.

• [SLOW TEST:27.739 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":114,"skipped":1943,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:05:33.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-f4f5abd1-8f95-41b3-a8eb-31e87a655547
STEP: Creating a pod to test consume configMaps
Aug 26 17:05:35.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f" in namespace "configmap-8962" to be "Succeeded or Failed"
Aug 26 17:05:36.117: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Pending", Reason="", readiness=false. Elapsed: 617.670998ms
Aug 26 17:05:38.651: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151030014s
Aug 26 17:05:41.130: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.630923529s
Aug 26 17:05:43.134: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.634932873s
Aug 26 17:05:45.333: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.833212984s
Aug 26 17:05:47.765: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.265157337s
STEP: Saw pod success
Aug 26 17:05:47.765: INFO: Pod "pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f" satisfied condition "Succeeded or Failed"
Aug 26 17:05:47.842: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f container configmap-volume-test: 
STEP: delete the pod
Aug 26 17:05:48.717: INFO: Waiting for pod pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f to disappear
Aug 26 17:05:49.859: INFO: Pod pod-configmaps-310aee2e-e566-4f43-a9e2-7c9348b4525f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:05:49.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8962" for this suite.

• [SLOW TEST:16.668 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1977,"failed":0}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:05:50.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:05:50.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7668" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":116,"skipped":1978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:05:50.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 26 17:05:51.990: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 17:05:52.474: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 17:05:52.823: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test

Aug 26 17:05:52.831: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:52.831: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:05:52.831: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:52.831: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:05:52.831: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:52.831: INFO: 	Container app ready: true, restart count 0
Aug 26 17:05:52.831: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 26 17:05:53.272: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:53.273: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:05:53.273: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:53.273: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:05:53.273: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:05:53.273: INFO: 	Container app ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 26 17:05:53.988: INFO: Pod daemon-set-69cql requesting resource cpu=0m on Node kali-worker2
Aug 26 17:05:53.988: INFO: Pod daemon-set-rsfwc requesting resource cpu=0m on Node kali-worker
Aug 26 17:05:53.988: INFO: Pod kindnet-4v6sn requesting resource cpu=100m on Node kali-worker2
Aug 26 17:05:53.988: INFO: Pod kindnet-f7bnz requesting resource cpu=100m on Node kali-worker
Aug 26 17:05:53.988: INFO: Pod kube-proxy-hhbw6 requesting resource cpu=0m on Node kali-worker
Aug 26 17:05:53.988: INFO: Pod kube-proxy-m77qg requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Aug 26 17:05:53.988: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Aug 26 17:05:53.996: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c.162ee0508f03bed2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1880/filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c.162ee051ba15dfba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c.162ee0523aaa87ba], Reason = [Created], Message = [Created container filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c.162ee0524e5e5bb3], Reason = [Started], Message = [Started container filler-pod-274c0718-7c23-42df-aaed-ff81ced3df2c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1.162ee050899b3686], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1880/filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1 to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1.162ee051654bad52], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1.162ee0520102a5d9], Reason = [Created], Message = [Created container filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1.162ee05230f552b1], Reason = [Started], Message = [Started container filler-pod-a2271f0f-b933-4712-ab02-1ca3f8bf98e1]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162ee052ef8769bf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:06:06.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1880" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:15.694 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":117,"skipped":2011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:06:06.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Aug 26 17:06:07.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-6560 -- logs-generator --log-lines-total 100 --run-duration 20s'
Aug 26 17:06:07.441: INFO: stderr: ""
Aug 26 17:06:07.441: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Aug 26 17:06:07.442: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Aug 26 17:06:07.442: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6560" to be "running and ready, or succeeded"
Aug 26 17:06:07.548: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 106.58579ms
Aug 26 17:06:09.552: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11027421s
Aug 26 17:06:11.556: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114603059s
Aug 26 17:06:13.837: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3956892s
Aug 26 17:06:15.939: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.497545097s
Aug 26 17:06:15.939: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Aug 26 17:06:15.939: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Aug 26 17:06:15.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560'
Aug 26 17:06:16.866: INFO: stderr: ""
Aug 26 17:06:16.866: INFO: stdout: "I0826 17:06:13.993266       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/z87 278\nI0826 17:06:14.193391       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/rtl 279\nI0826 17:06:14.393436       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/htn8 294\nI0826 17:06:14.593451       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/z9q 430\nI0826 17:06:14.793475       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/hkh 274\nI0826 17:06:14.993432       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/f8z 542\nI0826 17:06:15.193469       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/4xkb 574\nI0826 17:06:15.393420       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/pfvm 486\nI0826 17:06:15.593442       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/mhk 500\nI0826 17:06:15.793438       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/ktx 369\nI0826 17:06:15.993402       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/kqg 442\nI0826 17:06:16.195613       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/jzf 535\nI0826 17:06:16.393526       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jnd 262\nI0826 17:06:16.593560       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/tnw 306\nI0826 17:06:16.793403       1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/hscn 513\n"
STEP: limiting log lines
Aug 26 17:06:16.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560 --tail=1'
Aug 26 17:06:17.506: INFO: stderr: ""
Aug 26 17:06:17.506: INFO: stdout: "I0826 17:06:17.393436       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/pff 515\n"
Aug 26 17:06:17.506: INFO: got output "I0826 17:06:17.393436       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/pff 515\n"
STEP: limiting log bytes
Aug 26 17:06:17.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560 --limit-bytes=1'
Aug 26 17:06:18.258: INFO: stderr: ""
Aug 26 17:06:18.259: INFO: stdout: "I"
Aug 26 17:06:18.259: INFO: got output "I"
STEP: exposing timestamps
Aug 26 17:06:18.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560 --tail=1 --timestamps'
Aug 26 17:06:18.749: INFO: stderr: ""
Aug 26 17:06:18.749: INFO: stdout: "2020-08-26T17:06:18.593675947Z I0826 17:06:18.593442       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/7lw7 540\n"
Aug 26 17:06:18.749: INFO: got output "2020-08-26T17:06:18.593675947Z I0826 17:06:18.593442       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/7lw7 540\n"
STEP: restricting to a time range
Aug 26 17:06:21.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560 --since=1s'
Aug 26 17:06:21.476: INFO: stderr: ""
Aug 26 17:06:21.476: INFO: stdout: "I0826 17:06:20.593477       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/kube-system/pods/pzm 549\nI0826 17:06:20.793455       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/64m4 424\nI0826 17:06:20.993435       1 logs_generator.go:76] 35 PUT /api/v1/namespaces/default/pods/rqk 533\nI0826 17:06:21.193414       1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/cjdv 230\nI0826 17:06:21.393391       1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/mzg 426\n"
Aug 26 17:06:21.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6560 --since=24h'
Aug 26 17:06:21.584: INFO: stderr: ""
Aug 26 17:06:21.584: INFO: stdout: "I0826 17:06:13.993266       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/z87 278\nI0826 17:06:14.193391       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/rtl 279\nI0826 17:06:14.393436       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/htn8 294\nI0826 17:06:14.593451       1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/z9q 430\nI0826 17:06:14.793475       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/hkh 274\nI0826 17:06:14.993432       1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/f8z 542\nI0826 17:06:15.193469       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/4xkb 574\nI0826 17:06:15.393420       1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/pfvm 486\nI0826 17:06:15.593442       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/mhk 500\nI0826 17:06:15.793438       1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/ktx 369\nI0826 17:06:15.993402       1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/kqg 442\nI0826 17:06:16.195613       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/jzf 535\nI0826 17:06:16.393526       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jnd 262\nI0826 17:06:16.593560       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/tnw 306\nI0826 17:06:16.793403       1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/hscn 513\nI0826 17:06:16.993495       1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/j5l 216\nI0826 17:06:17.193448       1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/4bp 243\nI0826 17:06:17.393436       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/pff 515\nI0826 17:06:17.593421       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/jpd 252\nI0826 17:06:17.793457       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/r65 289\nI0826 17:06:17.993431       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/vqf 425\nI0826 17:06:18.193416       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/cwf9 448\nI0826 17:06:18.393431       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/qcnp 518\nI0826 17:06:18.593442       1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/7lw7 540\nI0826 17:06:18.793416       1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/j9l5 340\nI0826 17:06:18.993433       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/r5ph 480\nI0826 17:06:19.193444       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/j7q 527\nI0826 17:06:19.393432       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/742 315\nI0826 17:06:19.593432       1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/pn2m 474\nI0826 17:06:19.793458       1 logs_generator.go:76] 29 GET /api/v1/namespaces/ns/pods/nnt8 565\nI0826 17:06:19.993412       1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/sjx 233\nI0826 17:06:20.196185       1 logs_generator.go:76] 31 GET /api/v1/namespaces/ns/pods/rjx4 253\nI0826 17:06:20.393440       1 logs_generator.go:76] 32 GET /api/v1/namespaces/kube-system/pods/gm6 545\nI0826 17:06:20.593477       1 logs_generator.go:76] 33 PUT /api/v1/namespaces/kube-system/pods/pzm 549\nI0826 17:06:20.793455       1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/64m4 424\nI0826 
17:06:20.993435       1 logs_generator.go:76] 35 PUT /api/v1/namespaces/default/pods/rqk 533\nI0826 17:06:21.193414       1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/cjdv 230\nI0826 17:06:21.393391       1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/mzg 426\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Aug 26 17:06:21.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6560'
Aug 26 17:06:27.837: INFO: stderr: ""
Aug 26 17:06:27.837: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:06:27.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6560" for this suite.

• [SLOW TEST:21.392 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":118,"skipped":2046,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:06:28.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-4a343a43-a7ef-4fd3-b740-618034b60a8e
STEP: Creating a pod to test consume configMaps
Aug 26 17:06:30.164: INFO: Waiting up to 5m0s for pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6" in namespace "configmap-4099" to be "Succeeded or Failed"
Aug 26 17:06:30.537: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Pending", Reason="", readiness=false. Elapsed: 372.389704ms
Aug 26 17:06:32.728: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.563916606s
Aug 26 17:06:34.856: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69186373s
Aug 26 17:06:36.866: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701629226s
Aug 26 17:06:38.932: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.767634985s
Aug 26 17:06:40.937: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.772395853s
STEP: Saw pod success
Aug 26 17:06:40.937: INFO: Pod "pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6" satisfied condition "Succeeded or Failed"
Aug 26 17:06:40.940: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6 container configmap-volume-test: 
STEP: delete the pod
Aug 26 17:06:41.090: INFO: Waiting for pod pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6 to disappear
Aug 26 17:06:41.118: INFO: Pod pod-configmaps-adedfbf2-51e0-46b6-a6d4-7e8d622c32c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:06:41.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4099" for this suite.

• [SLOW TEST:13.075 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2061,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:06:41.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8888
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 17:06:41.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 26 17:06:41.359: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:06:43.364: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:06:45.411: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:06:47.370: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:06:49.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:06:51.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:06:53.362: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:06:55.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:06:57.362: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:06:59.364: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 26 17:06:59.370: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 17:07:01.459: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 17:07:03.373: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 26 17:07:12.736: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.217:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8888 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:07:12.736: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:07:12.909898       7 log.go:172] (0xc0030d5ad0) (0xc001606e60) Create stream
I0826 17:07:12.909928       7 log.go:172] (0xc0030d5ad0) (0xc001606e60) Stream added, broadcasting: 1
I0826 17:07:12.918847       7 log.go:172] (0xc0030d5ad0) Reply frame received for 1
I0826 17:07:12.918885       7 log.go:172] (0xc0030d5ad0) (0xc002100fa0) Create stream
I0826 17:07:12.918895       7 log.go:172] (0xc0030d5ad0) (0xc002100fa0) Stream added, broadcasting: 3
I0826 17:07:12.919536       7 log.go:172] (0xc0030d5ad0) Reply frame received for 3
I0826 17:07:12.919553       7 log.go:172] (0xc0030d5ad0) (0xc001414780) Create stream
I0826 17:07:12.919564       7 log.go:172] (0xc0030d5ad0) (0xc001414780) Stream added, broadcasting: 5
I0826 17:07:12.920152       7 log.go:172] (0xc0030d5ad0) Reply frame received for 5
I0826 17:07:12.997850       7 log.go:172] (0xc0030d5ad0) Data frame received for 3
I0826 17:07:12.997879       7 log.go:172] (0xc002100fa0) (3) Data frame handling
I0826 17:07:12.997901       7 log.go:172] (0xc002100fa0) (3) Data frame sent
I0826 17:07:12.998091       7 log.go:172] (0xc0030d5ad0) Data frame received for 3
I0826 17:07:12.998137       7 log.go:172] (0xc002100fa0) (3) Data frame handling
I0826 17:07:12.998222       7 log.go:172] (0xc0030d5ad0) Data frame received for 5
I0826 17:07:12.998241       7 log.go:172] (0xc001414780) (5) Data frame handling
I0826 17:07:12.999473       7 log.go:172] (0xc0030d5ad0) Data frame received for 1
I0826 17:07:12.999517       7 log.go:172] (0xc001606e60) (1) Data frame handling
I0826 17:07:12.999536       7 log.go:172] (0xc001606e60) (1) Data frame sent
I0826 17:07:12.999553       7 log.go:172] (0xc0030d5ad0) (0xc001606e60) Stream removed, broadcasting: 1
I0826 17:07:12.999566       7 log.go:172] (0xc0030d5ad0) Go away received
I0826 17:07:12.999655       7 log.go:172] (0xc0030d5ad0) (0xc001606e60) Stream removed, broadcasting: 1
I0826 17:07:12.999685       7 log.go:172] (0xc0030d5ad0) (0xc002100fa0) Stream removed, broadcasting: 3
I0826 17:07:12.999714       7 log.go:172] (0xc0030d5ad0) (0xc001414780) Stream removed, broadcasting: 5
Aug 26 17:07:12.999: INFO: Found all expected endpoints: [netserver-0]
Aug 26 17:07:13.432: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8888 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:07:13.432: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:07:13.462474       7 log.go:172] (0xc003158000) (0xc0016075e0) Create stream
I0826 17:07:13.462498       7 log.go:172] (0xc003158000) (0xc0016075e0) Stream added, broadcasting: 1
I0826 17:07:13.464101       7 log.go:172] (0xc003158000) Reply frame received for 1
I0826 17:07:13.464136       7 log.go:172] (0xc003158000) (0xc00122e000) Create stream
I0826 17:07:13.464148       7 log.go:172] (0xc003158000) (0xc00122e000) Stream added, broadcasting: 3
I0826 17:07:13.464925       7 log.go:172] (0xc003158000) Reply frame received for 3
I0826 17:07:13.464955       7 log.go:172] (0xc003158000) (0xc00122e0a0) Create stream
I0826 17:07:13.464966       7 log.go:172] (0xc003158000) (0xc00122e0a0) Stream added, broadcasting: 5
I0826 17:07:13.465731       7 log.go:172] (0xc003158000) Reply frame received for 5
I0826 17:07:13.532327       7 log.go:172] (0xc003158000) Data frame received for 5
I0826 17:07:13.532347       7 log.go:172] (0xc00122e0a0) (5) Data frame handling
I0826 17:07:13.532364       7 log.go:172] (0xc003158000) Data frame received for 3
I0826 17:07:13.532375       7 log.go:172] (0xc00122e000) (3) Data frame handling
I0826 17:07:13.532384       7 log.go:172] (0xc00122e000) (3) Data frame sent
I0826 17:07:13.532390       7 log.go:172] (0xc003158000) Data frame received for 3
I0826 17:07:13.532396       7 log.go:172] (0xc00122e000) (3) Data frame handling
I0826 17:07:13.533387       7 log.go:172] (0xc003158000) Data frame received for 1
I0826 17:07:13.533403       7 log.go:172] (0xc0016075e0) (1) Data frame handling
I0826 17:07:13.533412       7 log.go:172] (0xc0016075e0) (1) Data frame sent
I0826 17:07:13.533430       7 log.go:172] (0xc003158000) (0xc0016075e0) Stream removed, broadcasting: 1
I0826 17:07:13.533458       7 log.go:172] (0xc003158000) Go away received
I0826 17:07:13.533517       7 log.go:172] (0xc003158000) (0xc0016075e0) Stream removed, broadcasting: 1
I0826 17:07:13.533528       7 log.go:172] (0xc003158000) (0xc00122e000) Stream removed, broadcasting: 3
I0826 17:07:13.533533       7 log.go:172] (0xc003158000) (0xc00122e0a0) Stream removed, broadcasting: 5
Aug 26 17:07:13.533: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:07:13.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8888" for this suite.

• [SLOW TEST:32.414 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":2065,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:07:13.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:07:14.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0" in namespace "downward-api-6013" to be "Succeeded or Failed"
Aug 26 17:07:14.640: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0": Phase="Pending", Reason="", readiness=false. Elapsed: 266.970255ms
Aug 26 17:07:16.645: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271342246s
Aug 26 17:07:18.733: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359117744s
Aug 26 17:07:21.035: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661101895s
Aug 26 17:07:23.073: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.699257613s
STEP: Saw pod success
Aug 26 17:07:23.073: INFO: Pod "downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0" satisfied condition "Succeeded or Failed"
Aug 26 17:07:23.417: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0 container client-container: 
STEP: delete the pod
Aug 26 17:07:24.912: INFO: Waiting for pod downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0 to disappear
Aug 26 17:07:24.922: INFO: Pod downwardapi-volume-13c9a6aa-e8bb-47f8-9610-2990d0f25be0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:07:24.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6013" for this suite.

• [SLOW TEST:11.526 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2072,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:07:25.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-927/configmap-test-802b4ac4-3675-4734-8b92-7e67a7d8adc0
STEP: Creating a pod to test consume configMaps
Aug 26 17:07:26.485: INFO: Waiting up to 5m0s for pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470" in namespace "configmap-927" to be "Succeeded or Failed"
Aug 26 17:07:26.488: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470": Phase="Pending", Reason="", readiness=false. Elapsed: 3.549759ms
Aug 26 17:07:28.555: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070483531s
Aug 26 17:07:30.559: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074596381s
Aug 26 17:07:32.886: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401155562s
Aug 26 17:07:34.890: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.40520398s
STEP: Saw pod success
Aug 26 17:07:34.890: INFO: Pod "pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470" satisfied condition "Succeeded or Failed"
Aug 26 17:07:34.893: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470 container env-test: 
STEP: delete the pod
Aug 26 17:07:35.157: INFO: Waiting for pod pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470 to disappear
Aug 26 17:07:35.597: INFO: Pod pod-configmaps-d294ad0d-dfce-474f-b776-160ff718e470 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:07:35.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-927" for this suite.

• [SLOW TEST:11.055 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2097,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:07:36.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-71f86c7e-862c-4889-90bd-a6cd08051614
STEP: Creating configMap with name cm-test-opt-upd-fae35911-b7b1-419e-a153-eaadb2ddbc6d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-71f86c7e-862c-4889-90bd-a6cd08051614
STEP: Updating configmap cm-test-opt-upd-fae35911-b7b1-419e-a153-eaadb2ddbc6d
STEP: Creating configMap with name cm-test-opt-create-48b556c9-a13f-4706-adf0-8310dfe1d884
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:08:56.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4781" for this suite.

• [SLOW TEST:80.166 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2099,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:08:56.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Aug 26 17:08:56.913: INFO: Waiting up to 5m0s for pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92" in namespace "var-expansion-2973" to be "Succeeded or Failed"
Aug 26 17:08:57.155: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92": Phase="Pending", Reason="", readiness=false. Elapsed: 242.645582ms
Aug 26 17:08:59.159: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2466643s
Aug 26 17:09:01.215: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30201105s
Aug 26 17:09:03.479: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565979406s
Aug 26 17:09:05.483: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.570630284s
STEP: Saw pod success
Aug 26 17:09:05.483: INFO: Pod "var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92" satisfied condition "Succeeded or Failed"
Aug 26 17:09:05.487: INFO: Trying to get logs from node kali-worker pod var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92 container dapi-container: 
STEP: delete the pod
Aug 26 17:09:05.552: INFO: Waiting for pod var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92 to disappear
Aug 26 17:09:05.564: INFO: Pod var-expansion-6e0388c7-e472-40c8-8b70-0d8b34755c92 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:09:05.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2973" for this suite.

• [SLOW TEST:9.284 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2114,"failed":0}
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:09:05.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0826 17:09:47.138262       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 17:09:47.138: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:09:47.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2678" for this suite.

• [SLOW TEST:41.575 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":125,"skipped":2114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:09:47.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 26 17:09:48.295: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:10:10.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-744" for this suite.

• [SLOW TEST:24.111 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":126,"skipped":2153,"failed":0}
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:10:11.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 26 17:10:12.881: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:10:39.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2170" for this suite.

• [SLOW TEST:28.684 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":127,"skipped":2153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:10:39.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 17:10:56.992: INFO: Successfully updated pod "pod-update-activedeadlineseconds-13e68ddf-8a5f-4f7c-8ddb-b0f35c38849f"
Aug 26 17:10:56.992: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-13e68ddf-8a5f-4f7c-8ddb-b0f35c38849f" in namespace "pods-9821" to be "terminated due to deadline exceeded"
Aug 26 17:10:57.019: INFO: Pod "pod-update-activedeadlineseconds-13e68ddf-8a5f-4f7c-8ddb-b0f35c38849f": Phase="Running", Reason="", readiness=true. Elapsed: 26.823182ms
Aug 26 17:10:59.023: INFO: Pod "pod-update-activedeadlineseconds-13e68ddf-8a5f-4f7c-8ddb-b0f35c38849f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.030322369s
Aug 26 17:10:59.023: INFO: Pod "pod-update-activedeadlineseconds-13e68ddf-8a5f-4f7c-8ddb-b0f35c38849f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:10:59.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9821" for this suite.

• [SLOW TEST:19.089 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2193,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:10:59.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Aug 26 17:10:59.121: INFO: Waiting up to 5m0s for pod "client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41" in namespace "containers-779" to be "Succeeded or Failed"
Aug 26 17:10:59.149: INFO: Pod "client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41": Phase="Pending", Reason="", readiness=false. Elapsed: 27.964896ms
Aug 26 17:11:01.154: INFO: Pod "client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03231001s
Aug 26 17:11:03.158: INFO: Pod "client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036739745s
STEP: Saw pod success
Aug 26 17:11:03.158: INFO: Pod "client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41" satisfied condition "Succeeded or Failed"
Aug 26 17:11:03.161: INFO: Trying to get logs from node kali-worker2 pod client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41 container test-container: 
STEP: delete the pod
Aug 26 17:11:03.308: INFO: Waiting for pod client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41 to disappear
Aug 26 17:11:03.314: INFO: Pod client-containers-234d0cee-e77d-42a2-95da-82042c1eaa41 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:11:03.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-779" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2223,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:11:03.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-nlbw
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 17:11:03.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nlbw" in namespace "subpath-8378" to be "Succeeded or Failed"
Aug 26 17:11:03.450: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Pending", Reason="", readiness=false. Elapsed: 55.259762ms
Aug 26 17:11:05.455: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059799468s
Aug 26 17:11:07.498: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103093664s
Aug 26 17:11:09.502: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 6.107524851s
Aug 26 17:11:11.506: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 8.111268131s
Aug 26 17:11:13.523: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 10.127870084s
Aug 26 17:11:15.529: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 12.133954892s
Aug 26 17:11:17.533: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 14.138520861s
Aug 26 17:11:19.537: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 16.142410998s
Aug 26 17:11:21.541: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 18.146378438s
Aug 26 17:11:23.545: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 20.150095211s
Aug 26 17:11:25.548: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 22.153197097s
Aug 26 17:11:27.618: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Running", Reason="", readiness=true. Elapsed: 24.223549871s
Aug 26 17:11:29.622: INFO: Pod "pod-subpath-test-downwardapi-nlbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.227650962s
STEP: Saw pod success
Aug 26 17:11:29.623: INFO: Pod "pod-subpath-test-downwardapi-nlbw" satisfied condition "Succeeded or Failed"
Aug 26 17:11:29.626: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-nlbw container test-container-subpath-downwardapi-nlbw: 
STEP: delete the pod
Aug 26 17:11:29.740: INFO: Waiting for pod pod-subpath-test-downwardapi-nlbw to disappear
Aug 26 17:11:29.748: INFO: Pod pod-subpath-test-downwardapi-nlbw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-nlbw
Aug 26 17:11:29.748: INFO: Deleting pod "pod-subpath-test-downwardapi-nlbw" in namespace "subpath-8378"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:11:29.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8378" for this suite.

• [SLOW TEST:26.436 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":130,"skipped":2231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:11:29.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:11:34.070: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 26 17:11:36.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058693, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:11:38.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058693, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:11:40.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058694, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734058693, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:11:43.478: INFO: Waiting for the number of endpoints of service e2e-test-crd-conversion-webhook to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:11:43.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:11:47.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1828" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:21.181 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":131,"skipped":2255,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:11:50.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 26 17:12:03.305: INFO: 10 pods remaining
Aug 26 17:12:03.306: INFO: 0 pods have nil DeletionTimestamp
Aug 26 17:12:03.306: INFO: 
Aug 26 17:12:05.611: INFO: 0 pods remaining
Aug 26 17:12:05.611: INFO: 0 pods have nil DeletionTimestamp
Aug 26 17:12:05.611: INFO: 
Aug 26 17:12:06.955: INFO: 0 pods remaining
Aug 26 17:12:06.955: INFO: 0 pods have nil DeletionTimestamp
Aug 26 17:12:06.955: INFO: 
Aug 26 17:12:07.263: INFO: 0 pods remaining
Aug 26 17:12:07.263: INFO: 0 pods have nil DeletionTimestamp
Aug 26 17:12:07.263: INFO: 
STEP: Gathering metrics
W0826 17:12:09.136456       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 17:12:09.136: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:12:09.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3907" for this suite.

• [SLOW TEST:18.213 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":132,"skipped":2258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:12:09.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 26 17:12:10.825: INFO: Waiting up to 5m0s for pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51" in namespace "downward-api-8764" to be "Succeeded or Failed"
Aug 26 17:12:11.497: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Pending", Reason="", readiness=false. Elapsed: 671.614079ms
Aug 26 17:12:13.501: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.675478066s
Aug 26 17:12:16.296: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Pending", Reason="", readiness=false. Elapsed: 5.47040614s
Aug 26 17:12:18.320: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Pending", Reason="", readiness=false. Elapsed: 7.49481925s
Aug 26 17:12:20.624: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Running", Reason="", readiness=true. Elapsed: 9.799002006s
Aug 26 17:12:22.666: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.840480505s
STEP: Saw pod success
Aug 26 17:12:22.666: INFO: Pod "downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51" satisfied condition "Succeeded or Failed"
Aug 26 17:12:22.668: INFO: Trying to get logs from node kali-worker pod downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51 container dapi-container: 
STEP: delete the pod
Aug 26 17:12:23.186: INFO: Waiting for pod downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51 to disappear
Aug 26 17:12:23.488: INFO: Pod downward-api-2c39a583-9941-4767-8c05-2b7fc0ce6c51 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:12:23.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8764" for this suite.

• [SLOW TEST:14.707 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2285,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:12:23.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-3159
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3159 to expose endpoints map[]
Aug 26 17:12:24.688: INFO: Get endpoints failed (3.293115ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 26 17:12:25.925: INFO: successfully validated that service endpoint-test2 in namespace services-3159 exposes endpoints map[] (1.23945274s elapsed)
STEP: Creating pod pod1 in namespace services-3159
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3159 to expose endpoints map[pod1:[80]]
Aug 26 17:12:31.102: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.633012729s elapsed, will retry)
Aug 26 17:12:37.678: INFO: successfully validated that service endpoint-test2 in namespace services-3159 exposes endpoints map[pod1:[80]] (11.208911577s elapsed)
STEP: Creating pod pod2 in namespace services-3159
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3159 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 26 17:12:44.047: INFO: Unexpected endpoints: found map[0bd8423e-b37b-4bbb-9d62-270c05c083f3:[80]], expected map[pod1:[80] pod2:[80]] (6.365921736s elapsed, will retry)
Aug 26 17:12:47.513: INFO: successfully validated that service endpoint-test2 in namespace services-3159 exposes endpoints map[pod1:[80] pod2:[80]] (9.831162415s elapsed)
STEP: Deleting pod pod1 in namespace services-3159
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3159 to expose endpoints map[pod2:[80]]
Aug 26 17:12:47.938: INFO: successfully validated that service endpoint-test2 in namespace services-3159 exposes endpoints map[pod2:[80]] (420.093764ms elapsed)
STEP: Deleting pod pod2 in namespace services-3159
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3159 to expose endpoints map[]
Aug 26 17:12:49.857: INFO: successfully validated that service endpoint-test2 in namespace services-3159 exposes endpoints map[] (1.453466769s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:12:51.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3159" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:27.409 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":134,"skipped":2290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:12:51.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-8095
STEP: creating replication controller nodeport-test in namespace services-8095
I0826 17:12:52.836015       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8095, replica count: 2
I0826 17:12:55.886557       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:12:58.886793       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:13:01.887083       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 17:13:01.887: INFO: Creating new exec pod
Aug 26 17:13:06.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8095 execpodn5dmw -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Aug 26 17:13:07.140: INFO: stderr: "I0826 17:13:07.041035    3439 log.go:172] (0xc000a5c630) (0xc0009fc3c0) Create stream\nI0826 17:13:07.041111    3439 log.go:172] (0xc000a5c630) (0xc0009fc3c0) Stream added, broadcasting: 1\nI0826 17:13:07.045654    3439 log.go:172] (0xc000a5c630) Reply frame received for 1\nI0826 17:13:07.045689    3439 log.go:172] (0xc000a5c630) (0xc00054d860) Create stream\nI0826 17:13:07.045699    3439 log.go:172] (0xc000a5c630) (0xc00054d860) Stream added, broadcasting: 3\nI0826 17:13:07.047010    3439 log.go:172] (0xc000a5c630) Reply frame received for 3\nI0826 17:13:07.047083    3439 log.go:172] (0xc000a5c630) (0xc000296c80) Create stream\nI0826 17:13:07.047100    3439 log.go:172] (0xc000a5c630) (0xc000296c80) Stream added, broadcasting: 5\nI0826 17:13:07.047936    3439 log.go:172] (0xc000a5c630) Reply frame received for 5\nI0826 17:13:07.128719    3439 log.go:172] (0xc000a5c630) Data frame received for 3\nI0826 17:13:07.128892    3439 log.go:172] (0xc000a5c630) Data frame received for 5\nI0826 17:13:07.128925    3439 log.go:172] (0xc000296c80) (5) Data frame handling\nI0826 17:13:07.128937    3439 log.go:172] (0xc000296c80) (5) Data frame sent\nI0826 17:13:07.128944    3439 log.go:172] (0xc000a5c630) Data frame received for 5\nI0826 17:13:07.128950    3439 log.go:172] (0xc000296c80) (5) Data frame handling\nI0826 17:13:07.128968    3439 log.go:172] (0xc00054d860) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0826 17:13:07.130828    3439 log.go:172] (0xc000a5c630) Data frame received for 1\nI0826 17:13:07.130871    3439 log.go:172] (0xc0009fc3c0) (1) Data frame handling\nI0826 17:13:07.130916    3439 log.go:172] (0xc0009fc3c0) (1) Data frame sent\nI0826 17:13:07.130956    3439 log.go:172] (0xc000a5c630) (0xc0009fc3c0) Stream removed, broadcasting: 1\nI0826 17:13:07.130992    3439 log.go:172] (0xc000a5c630) Go away received\nI0826 17:13:07.131543    3439 log.go:172] (0xc000a5c630) (0xc0009fc3c0) Stream removed, broadcasting: 1\nI0826 17:13:07.131568    3439 log.go:172] (0xc000a5c630) (0xc00054d860) Stream removed, broadcasting: 3\nI0826 17:13:07.131581    3439 log.go:172] (0xc000a5c630) (0xc000296c80) Stream removed, broadcasting: 5\n"
Aug 26 17:13:07.140: INFO: stdout: ""
Aug 26 17:13:07.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8095 execpodn5dmw -- /bin/sh -x -c nc -zv -t -w 2 10.104.4.66 80'
Aug 26 17:13:07.342: INFO: stderr: "I0826 17:13:07.255261    3461 log.go:172] (0xc000a186e0) (0xc0009f83c0) Create stream\nI0826 17:13:07.255440    3461 log.go:172] (0xc000a186e0) (0xc0009f83c0) Stream added, broadcasting: 1\nI0826 17:13:07.260160    3461 log.go:172] (0xc000a186e0) Reply frame received for 1\nI0826 17:13:07.260214    3461 log.go:172] (0xc000a186e0) (0xc0006795e0) Create stream\nI0826 17:13:07.260232    3461 log.go:172] (0xc000a186e0) (0xc0006795e0) Stream added, broadcasting: 3\nI0826 17:13:07.261391    3461 log.go:172] (0xc000a186e0) Reply frame received for 3\nI0826 17:13:07.261452    3461 log.go:172] (0xc000a186e0) (0xc000518a00) Create stream\nI0826 17:13:07.261466    3461 log.go:172] (0xc000a186e0) (0xc000518a00) Stream added, broadcasting: 5\nI0826 17:13:07.262626    3461 log.go:172] (0xc000a186e0) Reply frame received for 5\nI0826 17:13:07.331168    3461 log.go:172] (0xc000a186e0) Data frame received for 3\nI0826 17:13:07.331199    3461 log.go:172] (0xc0006795e0) (3) Data frame handling\nI0826 17:13:07.331246    3461 log.go:172] (0xc000a186e0) Data frame received for 5\nI0826 17:13:07.331276    3461 log.go:172] (0xc000518a00) (5) Data frame handling\nI0826 17:13:07.331297    3461 log.go:172] (0xc000518a00) (5) Data frame sent\nI0826 17:13:07.331312    3461 log.go:172] (0xc000a186e0) Data frame received for 5\n+ nc -zv -t -w 2 10.104.4.66 80\nConnection to 10.104.4.66 80 port [tcp/http] succeeded!\nI0826 17:13:07.331323    3461 log.go:172] (0xc000518a00) (5) Data frame handling\nI0826 17:13:07.332886    3461 log.go:172] (0xc000a186e0) Data frame received for 1\nI0826 17:13:07.332910    3461 log.go:172] (0xc0009f83c0) (1) Data frame handling\nI0826 17:13:07.332924    3461 log.go:172] (0xc0009f83c0) (1) Data frame sent\nI0826 17:13:07.332936    3461 log.go:172] (0xc000a186e0) (0xc0009f83c0) Stream removed, broadcasting: 1\nI0826 17:13:07.333009    3461 log.go:172] (0xc000a186e0) Go away received\nI0826 17:13:07.333240    3461 log.go:172] (0xc000a186e0) (0xc0009f83c0) Stream removed, broadcasting: 1\nI0826 17:13:07.333257    3461 log.go:172] (0xc000a186e0) (0xc0006795e0) Stream removed, broadcasting: 3\nI0826 17:13:07.333266    3461 log.go:172] (0xc000a186e0) (0xc000518a00) Stream removed, broadcasting: 5\n"
Aug 26 17:13:07.342: INFO: stdout: ""
Aug 26 17:13:07.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8095 execpodn5dmw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30577'
Aug 26 17:13:07.573: INFO: stderr: "I0826 17:13:07.478188    3482 log.go:172] (0xc00090e4d0) (0xc000992140) Create stream\nI0826 17:13:07.478246    3482 log.go:172] (0xc00090e4d0) (0xc000992140) Stream added, broadcasting: 1\nI0826 17:13:07.480710    3482 log.go:172] (0xc00090e4d0) Reply frame received for 1\nI0826 17:13:07.480814    3482 log.go:172] (0xc00090e4d0) (0xc000693360) Create stream\nI0826 17:13:07.480830    3482 log.go:172] (0xc00090e4d0) (0xc000693360) Stream added, broadcasting: 3\nI0826 17:13:07.481928    3482 log.go:172] (0xc00090e4d0) Reply frame received for 3\nI0826 17:13:07.481969    3482 log.go:172] (0xc00090e4d0) (0xc000992280) Create stream\nI0826 17:13:07.481984    3482 log.go:172] (0xc00090e4d0) (0xc000992280) Stream added, broadcasting: 5\nI0826 17:13:07.482963    3482 log.go:172] (0xc00090e4d0) Reply frame received for 5\nI0826 17:13:07.566613    3482 log.go:172] (0xc00090e4d0) Data frame received for 3\nI0826 17:13:07.566661    3482 log.go:172] (0xc00090e4d0) Data frame received for 5\nI0826 17:13:07.566706    3482 log.go:172] (0xc000992280) (5) Data frame handling\nI0826 17:13:07.566732    3482 log.go:172] (0xc000992280) (5) Data frame sent\nI0826 17:13:07.566748    3482 log.go:172] (0xc00090e4d0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.15 30577\nConnection to 172.18.0.15 30577 port [tcp/30577] succeeded!\nI0826 17:13:07.566759    3482 log.go:172] (0xc000992280) (5) Data frame handling\nI0826 17:13:07.566791    3482 log.go:172] (0xc000693360) (3) Data frame handling\nI0826 17:13:07.568639    3482 log.go:172] (0xc00090e4d0) Data frame received for 1\nI0826 17:13:07.568664    3482 log.go:172] (0xc000992140) (1) Data frame handling\nI0826 17:13:07.568690    3482 log.go:172] (0xc000992140) (1) Data frame sent\nI0826 17:13:07.568824    3482 log.go:172] (0xc00090e4d0) (0xc000992140) Stream removed, broadcasting: 1\nI0826 17:13:07.568864    3482 log.go:172] (0xc00090e4d0) Go away received\nI0826 17:13:07.569213    3482 log.go:172] (0xc00090e4d0) (0xc000992140) Stream removed, broadcasting: 1\nI0826 17:13:07.569236    3482 log.go:172] (0xc00090e4d0) (0xc000693360) Stream removed, broadcasting: 3\nI0826 17:13:07.569250    3482 log.go:172] (0xc00090e4d0) (0xc000992280) Stream removed, broadcasting: 5\n"
Aug 26 17:13:07.573: INFO: stdout: ""
Aug 26 17:13:07.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-8095 execpodn5dmw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30577'
Aug 26 17:13:07.781: INFO: stderr: "I0826 17:13:07.695849    3504 log.go:172] (0xc000b4c160) (0xc000889360) Create stream\nI0826 17:13:07.695910    3504 log.go:172] (0xc000b4c160) (0xc000889360) Stream added, broadcasting: 1\nI0826 17:13:07.698882    3504 log.go:172] (0xc000b4c160) Reply frame received for 1\nI0826 17:13:07.698927    3504 log.go:172] (0xc000b4c160) (0xc00063b7c0) Create stream\nI0826 17:13:07.698941    3504 log.go:172] (0xc000b4c160) (0xc00063b7c0) Stream added, broadcasting: 3\nI0826 17:13:07.700103    3504 log.go:172] (0xc000b4c160) Reply frame received for 3\nI0826 17:13:07.700138    3504 log.go:172] (0xc000b4c160) (0xc00082a000) Create stream\nI0826 17:13:07.700151    3504 log.go:172] (0xc000b4c160) (0xc00082a000) Stream added, broadcasting: 5\nI0826 17:13:07.701251    3504 log.go:172] (0xc000b4c160) Reply frame received for 5\nI0826 17:13:07.769666    3504 log.go:172] (0xc000b4c160) Data frame received for 3\nI0826 17:13:07.769706    3504 log.go:172] (0xc00063b7c0) (3) Data frame handling\nI0826 17:13:07.769745    3504 log.go:172] (0xc000b4c160) Data frame received for 5\nI0826 17:13:07.769765    3504 log.go:172] (0xc00082a000) (5) Data frame handling\nI0826 17:13:07.769777    3504 log.go:172] (0xc00082a000) (5) Data frame sent\nI0826 17:13:07.769786    3504 log.go:172] (0xc000b4c160) Data frame received for 5\nI0826 17:13:07.769794    3504 log.go:172] (0xc00082a000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30577\nConnection to 172.18.0.13 30577 port [tcp/30577] succeeded!\nI0826 17:13:07.771580    3504 log.go:172] (0xc000b4c160) Data frame received for 1\nI0826 17:13:07.771611    3504 log.go:172] (0xc000889360) (1) Data frame handling\nI0826 17:13:07.771631    3504 log.go:172] (0xc000889360) (1) Data frame sent\nI0826 17:13:07.771660    3504 log.go:172] (0xc000b4c160) (0xc000889360) Stream removed, broadcasting: 1\nI0826 17:13:07.771690    3504 log.go:172] (0xc000b4c160) Go away received\nI0826 17:13:07.772105    3504 log.go:172] (0xc000b4c160) (0xc000889360) Stream removed, broadcasting: 1\nI0826 17:13:07.772133    3504 log.go:172] (0xc000b4c160) (0xc00063b7c0) Stream removed, broadcasting: 3\nI0826 17:13:07.772157    3504 log.go:172] (0xc000b4c160) (0xc00082a000) Stream removed, broadcasting: 5\n"
Aug 26 17:13:07.781: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:13:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8095" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:16.517 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":135,"skipped":2358,"failed":0}
SSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:13:07.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:13:07.907: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de" in namespace "security-context-test-6712" to be "Succeeded or Failed"
Aug 26 17:13:07.944: INFO: Pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de": Phase="Pending", Reason="", readiness=false. Elapsed: 36.679834ms
Aug 26 17:13:09.948: INFO: Pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040936473s
Aug 26 17:13:11.953: INFO: Pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045955294s
Aug 26 17:13:11.953: INFO: Pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de" satisfied condition "Succeeded or Failed"
Aug 26 17:13:11.960: INFO: Got logs for pod "busybox-privileged-false-c5271f73-17c7-4a6c-bbd3-5991a7d8c6de": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:13:11.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6712" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2364,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:13:11.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-940
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-940 to expose endpoints map[]
Aug 26 17:13:12.265: INFO: Get endpoints failed (5.22942ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 26 17:13:13.271: INFO: successfully validated that service multi-endpoint-test in namespace services-940 exposes endpoints map[] (1.010565185s elapsed)
STEP: Creating pod pod1 in namespace services-940
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-940 to expose endpoints map[pod1:[100]]
Aug 26 17:13:17.744: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.425746036s elapsed, will retry)
Aug 26 17:13:18.753: INFO: successfully validated that service multi-endpoint-test in namespace services-940 exposes endpoints map[pod1:[100]] (5.434178263s elapsed)
STEP: Creating pod pod2 in namespace services-940
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-940 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 26 17:13:23.081: INFO: successfully validated that service multi-endpoint-test in namespace services-940 exposes endpoints map[pod1:[100] pod2:[101]] (4.322811388s elapsed)
STEP: Deleting pod pod1 in namespace services-940
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-940 to expose endpoints map[pod2:[101]]
Aug 26 17:13:25.267: INFO: successfully validated that service multi-endpoint-test in namespace services-940 exposes endpoints map[pod2:[101]] (2.181630821s elapsed)
STEP: Deleting pod pod2 in namespace services-940
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-940 to expose endpoints map[]
Aug 26 17:13:26.303: INFO: successfully validated that service multi-endpoint-test in namespace services-940 exposes endpoints map[] (1.031873249s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:13:26.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-940" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.079 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":137,"skipped":2367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:13:27.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:13:45.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6080" for this suite.

• [SLOW TEST:18.309 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":138,"skipped":2430,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:13:45.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 26 17:13:46.746: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 17:13:50.333: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-179" for this suite.

• [SLOW TEST:18.102 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":139,"skipped":2437,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:03.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 26 17:14:03.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:18.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6803" for this suite.

• [SLOW TEST:15.164 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":140,"skipped":2472,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:18.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-de549152-1f33-4d9d-91f2-08e7ff0a5034
STEP: Creating a pod to test consume configMaps
Aug 26 17:14:18.967: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59" in namespace "projected-2461" to be "Succeeded or Failed"
Aug 26 17:14:19.018: INFO: Pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 51.142235ms
Aug 26 17:14:21.022: INFO: Pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054794503s
Aug 26 17:14:23.098: INFO: Pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131269849s
Aug 26 17:14:25.103: INFO: Pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135971761s
STEP: Saw pod success
Aug 26 17:14:25.103: INFO: Pod "pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59" satisfied condition "Succeeded or Failed"
Aug 26 17:14:25.105: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 17:14:25.140: INFO: Waiting for pod pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59 to disappear
Aug 26 17:14:25.156: INFO: Pod pod-projected-configmaps-bcfd447a-4002-4958-af70-c8169759bf59 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:25.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2461" for this suite.

• [SLOW TEST:6.539 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2476,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:25.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-ef89c6fd-e6c9-4545-9a42-66827fc77466
STEP: Creating a pod to test consume configMaps
Aug 26 17:14:25.346: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750" in namespace "projected-8655" to be "Succeeded or Failed"
Aug 26 17:14:25.351: INFO: Pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750": Phase="Pending", Reason="", readiness=false. Elapsed: 5.038707ms
Aug 26 17:14:27.454: INFO: Pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10788907s
Aug 26 17:14:29.459: INFO: Pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750": Phase="Running", Reason="", readiness=true. Elapsed: 4.112584601s
Aug 26 17:14:31.464: INFO: Pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117370387s
STEP: Saw pod success
Aug 26 17:14:31.464: INFO: Pod "pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750" satisfied condition "Succeeded or Failed"
Aug 26 17:14:31.467: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 17:14:31.508: INFO: Waiting for pod pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750 to disappear
Aug 26 17:14:31.523: INFO: Pod pod-projected-configmaps-cf367154-d08f-435e-bbf7-6d045ab83750 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:31.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8655" for this suite.

• [SLOW TEST:6.369 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2481,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:31.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:14:31.634: INFO: Creating ReplicaSet my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304
Aug 26 17:14:31.656: INFO: Pod name my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304: Found 0 pods out of 1
Aug 26 17:14:36.718: INFO: Pod name my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304: Found 1 pods out of 1
Aug 26 17:14:36.718: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304" is running
Aug 26 17:14:36.721: INFO: Pod "my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304-b6d7n" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:14:31 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:14:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:14:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:14:31 +0000 UTC Reason: Message:}])
Aug 26 17:14:36.721: INFO: Trying to dial the pod
Aug 26 17:14:42.370: INFO: Controller my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304: Got expected result from replica 1 [my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304-b6d7n]: "my-hostname-basic-35452daf-5544-43b0-b791-6bab083c3304-b6d7n", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:42.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1537" for this suite.

• [SLOW TEST:10.923 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":143,"skipped":2490,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:42.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 26 17:14:43.277: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:14:59.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4533" for this suite.

• [SLOW TEST:17.319 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2507,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:14:59.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 26 17:15:30.552: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:30.552: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:30.580146       7 log.go:172] (0xc0058b2790) (0xc0030c8320) Create stream
I0826 17:15:30.580186       7 log.go:172] (0xc0058b2790) (0xc0030c8320) Stream added, broadcasting: 1
I0826 17:15:30.582254       7 log.go:172] (0xc0058b2790) Reply frame received for 1
I0826 17:15:30.582288       7 log.go:172] (0xc0058b2790) (0xc00122f540) Create stream
I0826 17:15:30.582303       7 log.go:172] (0xc0058b2790) (0xc00122f540) Stream added, broadcasting: 3
I0826 17:15:30.583334       7 log.go:172] (0xc0058b2790) Reply frame received for 3
I0826 17:15:30.583367       7 log.go:172] (0xc0058b2790) (0xc0021cf860) Create stream
I0826 17:15:30.583379       7 log.go:172] (0xc0058b2790) (0xc0021cf860) Stream added, broadcasting: 5
I0826 17:15:30.584263       7 log.go:172] (0xc0058b2790) Reply frame received for 5
I0826 17:15:30.641959       7 log.go:172] (0xc0058b2790) Data frame received for 3
I0826 17:15:30.641990       7 log.go:172] (0xc00122f540) (3) Data frame handling
I0826 17:15:30.642002       7 log.go:172] (0xc00122f540) (3) Data frame sent
I0826 17:15:30.642010       7 log.go:172] (0xc0058b2790) Data frame received for 3
I0826 17:15:30.642015       7 log.go:172] (0xc00122f540) (3) Data frame handling
I0826 17:15:30.642036       7 log.go:172] (0xc0058b2790) Data frame received for 5
I0826 17:15:30.642046       7 log.go:172] (0xc0021cf860) (5) Data frame handling
I0826 17:15:30.642996       7 log.go:172] (0xc0058b2790) Data frame received for 1
I0826 17:15:30.643035       7 log.go:172] (0xc0030c8320) (1) Data frame handling
I0826 17:15:30.643053       7 log.go:172] (0xc0030c8320) (1) Data frame sent
I0826 17:15:30.643071       7 log.go:172] (0xc0058b2790) (0xc0030c8320) Stream removed, broadcasting: 1
I0826 17:15:30.643092       7 log.go:172] (0xc0058b2790) Go away received
I0826 17:15:30.644971       7 log.go:172] (0xc0058b2790) (0xc0030c8320) Stream removed, broadcasting: 1
I0826 17:15:30.645002       7 log.go:172] (0xc0058b2790) (0xc00122f540) Stream removed, broadcasting: 3
I0826 17:15:30.645024       7 log.go:172] (0xc0058b2790) (0xc0021cf860) Stream removed, broadcasting: 5
Aug 26 17:15:30.645: INFO: Exec stderr: ""
Aug 26 17:15:30.645: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:30.645: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:30.671479       7 log.go:172] (0xc0058b20b0) (0xc0015183c0) Create stream
I0826 17:15:30.671510       7 log.go:172] (0xc0058b20b0) (0xc0015183c0) Stream added, broadcasting: 1
I0826 17:15:30.675012       7 log.go:172] (0xc0058b20b0) Reply frame received for 1
I0826 17:15:30.675064       7 log.go:172] (0xc0058b20b0) (0xc001184000) Create stream
I0826 17:15:30.675079       7 log.go:172] (0xc0058b20b0) (0xc001184000) Stream added, broadcasting: 3
I0826 17:15:30.675932       7 log.go:172] (0xc0058b20b0) Reply frame received for 3
I0826 17:15:30.675951       7 log.go:172] (0xc0058b20b0) (0xc000358780) Create stream
I0826 17:15:30.675960       7 log.go:172] (0xc0058b20b0) (0xc000358780) Stream added, broadcasting: 5
I0826 17:15:30.676974       7 log.go:172] (0xc0058b20b0) Reply frame received for 5
I0826 17:15:30.736904       7 log.go:172] (0xc0058b20b0) Data frame received for 5
I0826 17:15:30.736955       7 log.go:172] (0xc000358780) (5) Data frame handling
I0826 17:15:30.736991       7 log.go:172] (0xc0058b20b0) Data frame received for 3
I0826 17:15:30.737015       7 log.go:172] (0xc001184000) (3) Data frame handling
I0826 17:15:30.737040       7 log.go:172] (0xc001184000) (3) Data frame sent
I0826 17:15:30.737059       7 log.go:172] (0xc0058b20b0) Data frame received for 3
I0826 17:15:30.737078       7 log.go:172] (0xc001184000) (3) Data frame handling
I0826 17:15:30.737880       7 log.go:172] (0xc0058b20b0) Data frame received for 1
I0826 17:15:30.737896       7 log.go:172] (0xc0015183c0) (1) Data frame handling
I0826 17:15:30.737906       7 log.go:172] (0xc0015183c0) (1) Data frame sent
I0826 17:15:30.738008       7 log.go:172] (0xc0058b20b0) (0xc0015183c0) Stream removed, broadcasting: 1
I0826 17:15:30.738068       7 log.go:172] (0xc0058b20b0) (0xc0015183c0) Stream removed, broadcasting: 1
I0826 17:15:30.738084       7 log.go:172] (0xc0058b20b0) (0xc001184000) Stream removed, broadcasting: 3
I0826 17:15:30.738096       7 log.go:172] (0xc0058b20b0) (0xc000358780) Stream removed, broadcasting: 5
Aug 26 17:15:30.738: INFO: Exec stderr: ""
Aug 26 17:15:30.738: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:30.738: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:30.739305       7 log.go:172] (0xc0058b20b0) Go away received
I0826 17:15:30.764858       7 log.go:172] (0xc0058b24d0) (0xc001518780) Create stream
I0826 17:15:30.764889       7 log.go:172] (0xc0058b24d0) (0xc001518780) Stream added, broadcasting: 1
I0826 17:15:30.766150       7 log.go:172] (0xc0058b24d0) Reply frame received for 1
I0826 17:15:30.766187       7 log.go:172] (0xc0058b24d0) (0xc001518960) Create stream
I0826 17:15:30.766196       7 log.go:172] (0xc0058b24d0) (0xc001518960) Stream added, broadcasting: 3
I0826 17:15:30.766952       7 log.go:172] (0xc0058b24d0) Reply frame received for 3
I0826 17:15:30.766978       7 log.go:172] (0xc0058b24d0) (0xc000187360) Create stream
I0826 17:15:30.766987       7 log.go:172] (0xc0058b24d0) (0xc000187360) Stream added, broadcasting: 5
I0826 17:15:30.767684       7 log.go:172] (0xc0058b24d0) Reply frame received for 5
I0826 17:15:30.830943       7 log.go:172] (0xc0058b24d0) Data frame received for 5
I0826 17:15:30.830971       7 log.go:172] (0xc000187360) (5) Data frame handling
I0826 17:15:30.831004       7 log.go:172] (0xc0058b24d0) Data frame received for 3
I0826 17:15:30.831023       7 log.go:172] (0xc001518960) (3) Data frame handling
I0826 17:15:30.831040       7 log.go:172] (0xc001518960) (3) Data frame sent
I0826 17:15:30.831055       7 log.go:172] (0xc0058b24d0) Data frame received for 3
I0826 17:15:30.831063       7 log.go:172] (0xc001518960) (3) Data frame handling
I0826 17:15:30.831904       7 log.go:172] (0xc0058b24d0) Data frame received for 1
I0826 17:15:30.831921       7 log.go:172] (0xc001518780) (1) Data frame handling
I0826 17:15:30.831937       7 log.go:172] (0xc001518780) (1) Data frame sent
I0826 17:15:30.831973       7 log.go:172] (0xc0058b24d0) (0xc001518780) Stream removed, broadcasting: 1
I0826 17:15:30.831995       7 log.go:172] (0xc0058b24d0) Go away received
I0826 17:15:30.832070       7 log.go:172] (0xc0058b24d0) (0xc001518780) Stream removed, broadcasting: 1
I0826 17:15:30.832088       7 log.go:172] (0xc0058b24d0) (0xc001518960) Stream removed, broadcasting: 3
I0826 17:15:30.832096       7 log.go:172] (0xc0058b24d0) (0xc000187360) Stream removed, broadcasting: 5
Aug 26 17:15:30.832: INFO: Exec stderr: ""
Aug 26 17:15:30.832: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:30.832: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:30.856146       7 log.go:172] (0xc0058b3080) (0xc001518c80) Create stream
I0826 17:15:30.856171       7 log.go:172] (0xc0058b3080) (0xc001518c80) Stream added, broadcasting: 1
I0826 17:15:30.857794       7 log.go:172] (0xc0058b3080) Reply frame received for 1
I0826 17:15:30.857817       7 log.go:172] (0xc0058b3080) (0xc000633680) Create stream
I0826 17:15:30.857825       7 log.go:172] (0xc0058b3080) (0xc000633680) Stream added, broadcasting: 3
I0826 17:15:30.858527       7 log.go:172] (0xc0058b3080) Reply frame received for 3
I0826 17:15:30.858552       7 log.go:172] (0xc0058b3080) (0xc001414000) Create stream
I0826 17:15:30.858561       7 log.go:172] (0xc0058b3080) (0xc001414000) Stream added, broadcasting: 5
I0826 17:15:30.859192       7 log.go:172] (0xc0058b3080) Reply frame received for 5
I0826 17:15:30.903254       7 log.go:172] (0xc0058b3080) Data frame received for 5
I0826 17:15:30.903282       7 log.go:172] (0xc001414000) (5) Data frame handling
I0826 17:15:30.903299       7 log.go:172] (0xc0058b3080) Data frame received for 3
I0826 17:15:30.903308       7 log.go:172] (0xc000633680) (3) Data frame handling
I0826 17:15:30.903318       7 log.go:172] (0xc000633680) (3) Data frame sent
I0826 17:15:30.904500       7 log.go:172] (0xc0058b3080) Data frame received for 3
I0826 17:15:30.904532       7 log.go:172] (0xc000633680) (3) Data frame handling
I0826 17:15:30.904544       7 log.go:172] (0xc0058b3080) Data frame received for 1
I0826 17:15:30.904550       7 log.go:172] (0xc001518c80) (1) Data frame handling
I0826 17:15:30.904555       7 log.go:172] (0xc001518c80) (1) Data frame sent
I0826 17:15:30.904561       7 log.go:172] (0xc0058b3080) (0xc001518c80) Stream removed, broadcasting: 1
I0826 17:15:30.904568       7 log.go:172] (0xc0058b3080) Go away received
I0826 17:15:30.904685       7 log.go:172] (0xc0058b3080) (0xc001518c80) Stream removed, broadcasting: 1
I0826 17:15:30.904702       7 log.go:172] (0xc0058b3080) (0xc000633680) Stream removed, broadcasting: 3
I0826 17:15:30.904715       7 log.go:172] (0xc0058b3080) (0xc001414000) Stream removed, broadcasting: 5
Aug 26 17:15:30.904: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 26 17:15:30.904: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:30.904: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:31.959794       7 log.go:172] (0xc001db0630) (0xc001415860) Create stream
I0826 17:15:31.959826       7 log.go:172] (0xc001db0630) (0xc001415860) Stream added, broadcasting: 1
I0826 17:15:31.961992       7 log.go:172] (0xc001db0630) Reply frame received for 1
I0826 17:15:31.962022       7 log.go:172] (0xc001db0630) (0xc001518d20) Create stream
I0826 17:15:31.962036       7 log.go:172] (0xc001db0630) (0xc001518d20) Stream added, broadcasting: 3
I0826 17:15:31.963057       7 log.go:172] (0xc001db0630) Reply frame received for 3
I0826 17:15:31.963099       7 log.go:172] (0xc001db0630) (0xc000633f40) Create stream
I0826 17:15:31.963112       7 log.go:172] (0xc001db0630) (0xc000633f40) Stream added, broadcasting: 5
I0826 17:15:31.963947       7 log.go:172] (0xc001db0630) Reply frame received for 5
I0826 17:15:32.030455       7 log.go:172] (0xc001db0630) Data frame received for 5
I0826 17:15:32.030497       7 log.go:172] (0xc000633f40) (5) Data frame handling
I0826 17:15:32.030518       7 log.go:172] (0xc001db0630) Data frame received for 3
I0826 17:15:32.030530       7 log.go:172] (0xc001518d20) (3) Data frame handling
I0826 17:15:32.030548       7 log.go:172] (0xc001518d20) (3) Data frame sent
I0826 17:15:32.030563       7 log.go:172] (0xc001db0630) Data frame received for 3
I0826 17:15:32.030573       7 log.go:172] (0xc001518d20) (3) Data frame handling
I0826 17:15:32.031681       7 log.go:172] (0xc001db0630) Data frame received for 1
I0826 17:15:32.031696       7 log.go:172] (0xc001415860) (1) Data frame handling
I0826 17:15:32.031703       7 log.go:172] (0xc001415860) (1) Data frame sent
I0826 17:15:32.031717       7 log.go:172] (0xc001db0630) (0xc001415860) Stream removed, broadcasting: 1
I0826 17:15:32.031737       7 log.go:172] (0xc001db0630) Go away received
I0826 17:15:32.031795       7 log.go:172] (0xc001db0630) (0xc001415860) Stream removed, broadcasting: 1
I0826 17:15:32.031818       7 log.go:172] (0xc001db0630) (0xc001518d20) Stream removed, broadcasting: 3
I0826 17:15:32.031833       7 log.go:172] (0xc001db0630) (0xc000633f40) Stream removed, broadcasting: 5
Aug 26 17:15:32.031: INFO: Exec stderr: ""
Aug 26 17:15:32.031: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:32.031: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:32.136296       7 log.go:172] (0xc0030d5970) (0xc0011846e0) Create stream
I0826 17:15:32.136327       7 log.go:172] (0xc0030d5970) (0xc0011846e0) Stream added, broadcasting: 1
I0826 17:15:32.138065       7 log.go:172] (0xc0030d5970) Reply frame received for 1
I0826 17:15:32.138112       7 log.go:172] (0xc0030d5970) (0xc001184f00) Create stream
I0826 17:15:32.138128       7 log.go:172] (0xc0030d5970) (0xc001184f00) Stream added, broadcasting: 3
I0826 17:15:32.138989       7 log.go:172] (0xc0030d5970) Reply frame received for 3
I0826 17:15:32.139028       7 log.go:172] (0xc0030d5970) (0xc001185220) Create stream
I0826 17:15:32.139042       7 log.go:172] (0xc0030d5970) (0xc001185220) Stream added, broadcasting: 5
I0826 17:15:32.139834       7 log.go:172] (0xc0030d5970) Reply frame received for 5
I0826 17:15:32.210281       7 log.go:172] (0xc0030d5970) Data frame received for 5
I0826 17:15:32.210308       7 log.go:172] (0xc001185220) (5) Data frame handling
I0826 17:15:32.210345       7 log.go:172] (0xc0030d5970) Data frame received for 3
I0826 17:15:32.210372       7 log.go:172] (0xc001184f00) (3) Data frame handling
I0826 17:15:32.210409       7 log.go:172] (0xc001184f00) (3) Data frame sent
I0826 17:15:32.210425       7 log.go:172] (0xc0030d5970) Data frame received for 3
I0826 17:15:32.210435       7 log.go:172] (0xc001184f00) (3) Data frame handling
I0826 17:15:32.211659       7 log.go:172] (0xc0030d5970) Data frame received for 1
I0826 17:15:32.211673       7 log.go:172] (0xc0011846e0) (1) Data frame handling
I0826 17:15:32.211681       7 log.go:172] (0xc0011846e0) (1) Data frame sent
I0826 17:15:32.211689       7 log.go:172] (0xc0030d5970) (0xc0011846e0) Stream removed, broadcasting: 1
I0826 17:15:32.211707       7 log.go:172] (0xc0030d5970) Go away received
I0826 17:15:32.211799       7 log.go:172] (0xc0030d5970) (0xc0011846e0) Stream removed, broadcasting: 1
I0826 17:15:32.211819       7 log.go:172] (0xc0030d5970) (0xc001184f00) Stream removed, broadcasting: 3
I0826 17:15:32.211836       7 log.go:172] (0xc0030d5970) (0xc001185220) Stream removed, broadcasting: 5
Aug 26 17:15:32.211: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 26 17:15:32.211: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:32.211: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:32.859249       7 log.go:172] (0xc002984000) (0xc001840000) Create stream
I0826 17:15:32.859325       7 log.go:172] (0xc002984000) (0xc001840000) Stream added, broadcasting: 1
I0826 17:15:32.862557       7 log.go:172] (0xc002984000) Reply frame received for 1
I0826 17:15:32.862632       7 log.go:172] (0xc002984000) (0xc001b24140) Create stream
I0826 17:15:32.862657       7 log.go:172] (0xc002984000) (0xc001b24140) Stream added, broadcasting: 3
I0826 17:15:32.863909       7 log.go:172] (0xc002984000) Reply frame received for 3
I0826 17:15:32.863951       7 log.go:172] (0xc002984000) (0xc0018400a0) Create stream
I0826 17:15:32.863969       7 log.go:172] (0xc002984000) (0xc0018400a0) Stream added, broadcasting: 5
I0826 17:15:32.864986       7 log.go:172] (0xc002984000) Reply frame received for 5
I0826 17:15:32.938665       7 log.go:172] (0xc002984000) Data frame received for 5
I0826 17:15:32.938690       7 log.go:172] (0xc0018400a0) (5) Data frame handling
I0826 17:15:32.938711       7 log.go:172] (0xc002984000) Data frame received for 3
I0826 17:15:32.938718       7 log.go:172] (0xc001b24140) (3) Data frame handling
I0826 17:15:32.938731       7 log.go:172] (0xc001b24140) (3) Data frame sent
I0826 17:15:32.938737       7 log.go:172] (0xc002984000) Data frame received for 3
I0826 17:15:32.938752       7 log.go:172] (0xc001b24140) (3) Data frame handling
I0826 17:15:32.943653       7 log.go:172] (0xc002984000) Data frame received for 1
I0826 17:15:32.943675       7 log.go:172] (0xc001840000) (1) Data frame handling
I0826 17:15:32.943687       7 log.go:172] (0xc001840000) (1) Data frame sent
I0826 17:15:32.943701       7 log.go:172] (0xc002984000) (0xc001840000) Stream removed, broadcasting: 1
I0826 17:15:32.943717       7 log.go:172] (0xc002984000) Go away received
I0826 17:15:32.943861       7 log.go:172] (0xc002984000) (0xc001840000) Stream removed, broadcasting: 1
I0826 17:15:32.943882       7 log.go:172] (0xc002984000) (0xc001b24140) Stream removed, broadcasting: 3
I0826 17:15:32.943902       7 log.go:172] (0xc002984000) (0xc0018400a0) Stream removed, broadcasting: 5
Aug 26 17:15:32.943: INFO: Exec stderr: ""
Aug 26 17:15:32.943: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:32.943: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:32.987162       7 log.go:172] (0xc001ca2370) (0xc001840640) Create stream
I0826 17:15:32.987201       7 log.go:172] (0xc001ca2370) (0xc001840640) Stream added, broadcasting: 1
I0826 17:15:32.989182       7 log.go:172] (0xc001ca2370) Reply frame received for 1
I0826 17:15:32.989222       7 log.go:172] (0xc001ca2370) (0xc001840820) Create stream
I0826 17:15:32.989239       7 log.go:172] (0xc001ca2370) (0xc001840820) Stream added, broadcasting: 3
I0826 17:15:32.990476       7 log.go:172] (0xc001ca2370) Reply frame received for 3
I0826 17:15:32.990517       7 log.go:172] (0xc001ca2370) (0xc0011e0b40) Create stream
I0826 17:15:32.990536       7 log.go:172] (0xc001ca2370) (0xc0011e0b40) Stream added, broadcasting: 5
I0826 17:15:32.992222       7 log.go:172] (0xc001ca2370) Reply frame received for 5
I0826 17:15:33.064280       7 log.go:172] (0xc001ca2370) Data frame received for 3
I0826 17:15:33.064360       7 log.go:172] (0xc001840820) (3) Data frame handling
I0826 17:15:33.064445       7 log.go:172] (0xc001840820) (3) Data frame sent
I0826 17:15:33.064475       7 log.go:172] (0xc001ca2370) Data frame received for 3
I0826 17:15:33.064496       7 log.go:172] (0xc001840820) (3) Data frame handling
I0826 17:15:33.064521       7 log.go:172] (0xc001ca2370) Data frame received for 5
I0826 17:15:33.064535       7 log.go:172] (0xc0011e0b40) (5) Data frame handling
I0826 17:15:33.065522       7 log.go:172] (0xc001ca2370) Data frame received for 1
I0826 17:15:33.065543       7 log.go:172] (0xc001840640) (1) Data frame handling
I0826 17:15:33.065560       7 log.go:172] (0xc001840640) (1) Data frame sent
I0826 17:15:33.065576       7 log.go:172] (0xc001ca2370) (0xc001840640) Stream removed, broadcasting: 1
I0826 17:15:33.065613       7 log.go:172] (0xc001ca2370) Go away received
I0826 17:15:33.065634       7 log.go:172] (0xc001ca2370) (0xc001840640) Stream removed, broadcasting: 1
I0826 17:15:33.065646       7 log.go:172] (0xc001ca2370) (0xc001840820) Stream removed, broadcasting: 3
I0826 17:15:33.065657       7 log.go:172] (0xc001ca2370) (0xc0011e0b40) Stream removed, broadcasting: 5
Aug 26 17:15:33.065: INFO: Exec stderr: ""
Aug 26 17:15:33.065: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:33.065: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:33.199926       7 log.go:172] (0xc0058b36b0) (0xc001519040) Create stream
I0826 17:15:33.199948       7 log.go:172] (0xc0058b36b0) (0xc001519040) Stream added, broadcasting: 1
I0826 17:15:33.201241       7 log.go:172] (0xc0058b36b0) Reply frame received for 1
I0826 17:15:33.201261       7 log.go:172] (0xc0058b36b0) (0xc0018408c0) Create stream
I0826 17:15:33.201269       7 log.go:172] (0xc0058b36b0) (0xc0018408c0) Stream added, broadcasting: 3
I0826 17:15:33.201785       7 log.go:172] (0xc0058b36b0) Reply frame received for 3
I0826 17:15:33.201806       7 log.go:172] (0xc0058b36b0) (0xc001519540) Create stream
I0826 17:15:33.201815       7 log.go:172] (0xc0058b36b0) (0xc001519540) Stream added, broadcasting: 5
I0826 17:15:33.202424       7 log.go:172] (0xc0058b36b0) Reply frame received for 5
I0826 17:15:33.265024       7 log.go:172] (0xc0058b36b0) Data frame received for 3
I0826 17:15:33.265061       7 log.go:172] (0xc0018408c0) (3) Data frame handling
I0826 17:15:33.265073       7 log.go:172] (0xc0018408c0) (3) Data frame sent
I0826 17:15:33.265081       7 log.go:172] (0xc0058b36b0) Data frame received for 3
I0826 17:15:33.265110       7 log.go:172] (0xc0058b36b0) Data frame received for 5
I0826 17:15:33.265140       7 log.go:172] (0xc001519540) (5) Data frame handling
I0826 17:15:33.265158       7 log.go:172] (0xc0018408c0) (3) Data frame handling
I0826 17:15:33.266044       7 log.go:172] (0xc0058b36b0) Data frame received for 1
I0826 17:15:33.266091       7 log.go:172] (0xc001519040) (1) Data frame handling
I0826 17:15:33.266112       7 log.go:172] (0xc001519040) (1) Data frame sent
I0826 17:15:33.266134       7 log.go:172] (0xc0058b36b0) (0xc001519040) Stream removed, broadcasting: 1
I0826 17:15:33.266154       7 log.go:172] (0xc0058b36b0) Go away received
I0826 17:15:33.266339       7 log.go:172] (0xc0058b36b0) (0xc001519040) Stream removed, broadcasting: 1
I0826 17:15:33.266371       7 log.go:172] (0xc0058b36b0) (0xc0018408c0) Stream removed, broadcasting: 3
I0826 17:15:33.266395       7 log.go:172] (0xc0058b36b0) (0xc001519540) Stream removed, broadcasting: 5
Aug 26 17:15:33.266: INFO: Exec stderr: ""
Aug 26 17:15:33.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7399 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:15:33.266: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:15:33.289925       7 log.go:172] (0xc001db09a0) (0xc001b24820) Create stream
I0826 17:15:33.289947       7 log.go:172] (0xc001db09a0) (0xc001b24820) Stream added, broadcasting: 1
I0826 17:15:33.291562       7 log.go:172] (0xc001db09a0) Reply frame received for 1
I0826 17:15:33.291595       7 log.go:172] (0xc001db09a0) (0xc001b24c80) Create stream
I0826 17:15:33.291606       7 log.go:172] (0xc001db09a0) (0xc001b24c80) Stream added, broadcasting: 3
I0826 17:15:33.292576       7 log.go:172] (0xc001db09a0) Reply frame received for 3
I0826 17:15:33.292633       7 log.go:172] (0xc001db09a0) (0xc0011e0e60) Create stream
I0826 17:15:33.292658       7 log.go:172] (0xc001db09a0) (0xc0011e0e60) Stream added, broadcasting: 5
I0826 17:15:33.293591       7 log.go:172] (0xc001db09a0) Reply frame received for 5
I0826 17:15:33.344156       7 log.go:172] (0xc001db09a0) Data frame received for 3
I0826 17:15:33.344184       7 log.go:172] (0xc001b24c80) (3) Data frame handling
I0826 17:15:33.344206       7 log.go:172] (0xc001b24c80) (3) Data frame sent
I0826 17:15:33.344221       7 log.go:172] (0xc001db09a0) Data frame received for 3
I0826 17:15:33.344229       7 log.go:172] (0xc001b24c80) (3) Data frame handling
I0826 17:15:33.344254       7 log.go:172] (0xc001db09a0) Data frame received for 5
I0826 17:15:33.344267       7 log.go:172] (0xc0011e0e60) (5) Data frame handling
I0826 17:15:33.345219       7 log.go:172] (0xc001db09a0) Data frame received for 1
I0826 17:15:33.345234       7 log.go:172] (0xc001b24820) (1) Data frame handling
I0826 17:15:33.345247       7 log.go:172] (0xc001b24820) (1) Data frame sent
I0826 17:15:33.345258       7 log.go:172] (0xc001db09a0) (0xc001b24820) Stream removed, broadcasting: 1
I0826 17:15:33.345267       7 log.go:172] (0xc001db09a0) Go away received
I0826 17:15:33.345401       7 log.go:172] (0xc001db09a0) (0xc001b24820) Stream removed, broadcasting: 1
I0826 17:15:33.345416       7 log.go:172] (0xc001db09a0) (0xc001b24c80) Stream removed, broadcasting: 3
I0826 17:15:33.345427       7 log.go:172] (0xc001db09a0) (0xc0011e0e60) Stream removed, broadcasting: 5
Aug 26 17:15:33.345: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:15:33.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7399" for this suite.

• [SLOW TEST:33.575 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:15:33.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:15:44.884: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local from pod dns-6503/dns-test-c26c2864-5bc2-402d-9520-b4e446922ce3: Get https://172.30.12.66:44383/api/v1/namespaces/dns-6503/pods/dns-test-c26c2864-5bc2-402d-9520-b4e446922ce3/proxy/results/wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local: stream error: stream ID 7975; INTERNAL_ERROR
Aug 26 17:15:45.108: INFO: File jessie_udp@dns-test-service-3.dns-6503.svc.cluster.local from pod  dns-6503/dns-test-c26c2864-5bc2-402d-9520-b4e446922ce3 contains '' instead of 'foo.example.com.'
Aug 26 17:15:45.108: INFO: Lookups using dns-6503/dns-test-c26c2864-5bc2-402d-9520-b4e446922ce3 failed for: [wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local jessie_udp@dns-test-service-3.dns-6503.svc.cluster.local]

Aug 26 17:15:50.136: INFO: DNS probes using dns-test-c26c2864-5bc2-402d-9520-b4e446922ce3 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:16:16.152: INFO: DNS probes using dns-test-6ff98e6e-c543-489d-93b7-962a0660f298 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6503.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6503.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:16:46.334: INFO: DNS probes using dns-test-1a9fd8d8-84f6-4b0d-b1df-0adf03c05dda succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:16:46.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6503" for this suite.

• [SLOW TEST:73.731 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":146,"skipped":2551,"failed":0}
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:16:47.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9585.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.143.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.143.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.143.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.143.199_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9585.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9585.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9585.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9585.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9585.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 199.143.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.143.199_udp@PTR;check="$$(dig +tcp +noall +answer +search 199.143.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.143.199_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:17:02.836: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.884: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.905: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.909: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.989: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.993: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.995: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:02.998: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:03.011: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:08.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.019: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.022: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.025: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.044: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.047: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.050: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.052: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:08.069: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:13.267: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:13.271: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.705: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.744: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.768: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.771: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.774: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.777: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:14.791: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:18.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.019: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.022: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.024: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.044: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.046: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.049: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.052: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:18.068: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:23.017: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.025: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.052: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.055: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.058: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.060: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:23.078: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:28.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.020: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.024: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.051: INFO: Unable to read jessie_udp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.053: INFO: Unable to read jessie_tcp@dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.056: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local from pod dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43: the server could not find the requested resource (get pods dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43)
Aug 26 17:17:28.075: INFO: Lookups using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 failed for: [wheezy_udp@dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@dns-test-service.dns-9585.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_udp@dns-test-service.dns-9585.svc.cluster.local jessie_tcp@dns-test-service.dns-9585.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9585.svc.cluster.local]

Aug 26 17:17:33.107: INFO: DNS probes using dns-9585/dns-test-f352869c-dcea-4b04-a42d-9b3a29170a43 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:17:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9585" for this suite.

• [SLOW TEST:47.499 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":147,"skipped":2551,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:17:34.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:17:34.774: INFO: Creating deployment "test-recreate-deployment"
Aug 26 17:17:34.810: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 26 17:17:34.918: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 26 17:17:36.925: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 26 17:17:36.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059055, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:17:39.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059055, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:17:41.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059055, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:17:43.283: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059055, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:17:44.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059055, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059054, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:17:46.930: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 26 17:17:46.937: INFO: Updating deployment test-recreate-deployment
Aug 26 17:17:46.937: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 26 17:17:50.291: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-7851 /apis/apps/v1/namespaces/deployment-7851/deployments/test-recreate-deployment 4e9ceff9-5135-417e-ae18-eac68ed50a94 1108252 2 2020-08-26 17:17:34 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-26 17:17:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 17:17:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00485ca58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 17:17:49 +0000 UTC,LastTransitionTime:2020-08-26 17:17:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-26 17:17:49 +0000 UTC,LastTransitionTime:2020-08-26 17:17:34 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 26 17:17:50.295: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-7851 /apis/apps/v1/namespaces/deployment-7851/replicasets/test-recreate-deployment-d5667d9c7 d4a5c3b0-6b24-4a4a-9faf-3018373ec239 1108251 1 2020-08-26 17:17:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4e9ceff9-5135-417e-ae18-eac68ed50a94 0xc0060cd5c0 0xc0060cd5c1}] []  [{kube-controller-manager Update apps/v1 2020-08-26 17:17:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 101 57 99 101 102 102 57 45 53 49 51 53 45 52 49 55 101 45 97 101 49 56 45 101 97 99 54 56 101 100 53 48 97 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 
121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0060cd638  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:17:50.295: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 26 17:17:50.295: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-7851 /apis/apps/v1/namespaces/deployment-7851/replicasets/test-recreate-deployment-74d98b5f7c b0889af9-0a2b-4bce-a7b0-20adb6999aa5 1108238 2 2020-08-26 17:17:34 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4e9ceff9-5135-417e-ae18-eac68ed50a94 0xc0060cd4b7 0xc0060cd4b8}] []  [{kube-controller-manager Update apps/v1 2020-08-26 17:17:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 101 57 99 101 102 102 57 45 53 49 51 53 45 52 49 55 101 45 97 101 49 56 45 101 97 99 54 56 101 100 53 48 97 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 
115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0060cd558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:17:50.529: INFO: Pod "test-recreate-deployment-d5667d9c7-djctn" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-djctn test-recreate-deployment-d5667d9c7- deployment-7851 /api/v1/namespaces/deployment-7851/pods/test-recreate-deployment-d5667d9c7-djctn 1428b55c-8b26-4047-9533-89c93022a037 1108253 0 2020-08-26 17:17:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 d4a5c3b0-6b24-4a4a-9faf-3018373ec239 0xc0060cdbc0 0xc0060cdbc1}] []  [{kube-controller-manager Update v1 2020-08-26 17:17:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 97 53 99 51 98 48 45 54 98 50 52 45 52 97 52 97 45 57 102 97 102 45 51 48 49 56 51 55 51 101 99 50 51 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:17:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 
116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nvz69,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nvz69,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nvz69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:n
il,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:17:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:17:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:17:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:17:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:17:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:17:50.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7851" for this suite.

• [SLOW TEST:16.005 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":148,"skipped":2557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:17:50.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:17:52.621: INFO: Waiting up to 5m0s for pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff" in namespace "downward-api-4340" to be "Succeeded or Failed"
Aug 26 17:17:53.049: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 427.586066ms
Aug 26 17:17:55.173: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55186729s
Aug 26 17:17:57.599: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.978351007s
Aug 26 17:18:00.048: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.426583511s
Aug 26 17:18:02.662: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Pending", Reason="", readiness=false. Elapsed: 10.040484956s
Aug 26 17:18:05.224: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Running", Reason="", readiness=true. Elapsed: 12.602433383s
Aug 26 17:18:07.272: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.650476946s
STEP: Saw pod success
Aug 26 17:18:07.272: INFO: Pod "downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff" satisfied condition "Succeeded or Failed"
Aug 26 17:18:07.277: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff container client-container: 
STEP: delete the pod
Aug 26 17:18:09.070: INFO: Waiting for pod downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff to disappear
Aug 26 17:18:10.113: INFO: Pod downwardapi-volume-420e29d4-e4d8-4e0b-bacd-e5fa9c3e03ff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:18:10.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4340" for this suite.

• [SLOW TEST:19.850 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2602,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:18:10.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-35c1289a-e9a3-4bf8-86fa-dba2e755b817
STEP: Creating a pod to test consume secrets
Aug 26 17:18:11.387: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192" in namespace "projected-6656" to be "Succeeded or Failed"
Aug 26 17:18:11.476: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Pending", Reason="", readiness=false. Elapsed: 88.669635ms
Aug 26 17:18:13.697: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309599711s
Aug 26 17:18:16.502: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Pending", Reason="", readiness=false. Elapsed: 5.114687687s
Aug 26 17:18:18.731: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Pending", Reason="", readiness=false. Elapsed: 7.344227601s
Aug 26 17:18:20.825: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Running", Reason="", readiness=true. Elapsed: 9.437828044s
Aug 26 17:18:22.829: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.442161905s
STEP: Saw pod success
Aug 26 17:18:22.829: INFO: Pod "pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192" satisfied condition "Succeeded or Failed"
Aug 26 17:18:22.878: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 17:18:22.933: INFO: Waiting for pod pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192 to disappear
Aug 26 17:18:22.941: INFO: Pod pod-projected-secrets-df333c7f-06ce-4cf4-9ba2-8a6746c6a192 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:18:22.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6656" for this suite.

• [SLOW TEST:12.561 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2604,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:18:23.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:18:23.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 17:18:26.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 create -f -'
Aug 26 17:18:41.365: INFO: stderr: ""
Aug 26 17:18:41.365: INFO: stdout: "e2e-test-crd-publish-openapi-1626-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 17:18:41.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 delete e2e-test-crd-publish-openapi-1626-crds test-cr'
Aug 26 17:18:41.902: INFO: stderr: ""
Aug 26 17:18:41.902: INFO: stdout: "e2e-test-crd-publish-openapi-1626-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Aug 26 17:18:41.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 apply -f -'
Aug 26 17:18:42.442: INFO: stderr: ""
Aug 26 17:18:42.442: INFO: stdout: "e2e-test-crd-publish-openapi-1626-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Aug 26 17:18:42.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1285 delete e2e-test-crd-publish-openapi-1626-crds test-cr'
Aug 26 17:18:42.689: INFO: stderr: ""
Aug 26 17:18:42.689: INFO: stdout: "e2e-test-crd-publish-openapi-1626-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Aug 26 17:18:42.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1626-crds'
Aug 26 17:18:43.144: INFO: stderr: ""
Aug 26 17:18:43.144: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1626-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:18:46.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1285" for this suite.

• [SLOW TEST:23.120 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":151,"skipped":2690,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:18:46.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-b7cdaca1-6096-45d6-8d80-43b2338daedb
STEP: Creating a pod to test consume secrets
Aug 26 17:18:46.881: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe" in namespace "projected-8100" to be "Succeeded or Failed"
Aug 26 17:18:46.962: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Pending", Reason="", readiness=false. Elapsed: 81.32161ms
Aug 26 17:18:49.207: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325974617s
Aug 26 17:18:51.467: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.585664815s
Aug 26 17:18:53.575: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694347795s
Aug 26 17:18:55.658: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.776896923s
Aug 26 17:18:57.991: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Running", Reason="", readiness=true. Elapsed: 11.110263394s
Aug 26 17:18:59.995: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.114066672s
STEP: Saw pod success
Aug 26 17:18:59.995: INFO: Pod "pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe" satisfied condition "Succeeded or Failed"
Aug 26 17:18:59.998: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 17:19:00.079: INFO: Waiting for pod pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe to disappear
Aug 26 17:19:00.104: INFO: Pod pod-projected-secrets-dced6332-f540-4a27-a032-f99df34126fe no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:19:00.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8100" for this suite.

• [SLOW TEST:14.091 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2729,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:19:00.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-15288419-2928-4003-a0a2-2535506e7ae0
STEP: Creating a pod to test consume secrets
Aug 26 17:19:00.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c" in namespace "projected-9102" to be "Succeeded or Failed"
Aug 26 17:19:00.536: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Pending", Reason="", readiness=false. Elapsed: 82.189611ms
Aug 26 17:19:02.602: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148798222s
Aug 26 17:19:04.607: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153402628s
Aug 26 17:19:06.686: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232541179s
Aug 26 17:19:08.769: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Running", Reason="", readiness=true. Elapsed: 8.315486734s
Aug 26 17:19:11.457: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.003124413s
STEP: Saw pod success
Aug 26 17:19:11.457: INFO: Pod "pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c" satisfied condition "Succeeded or Failed"
Aug 26 17:19:11.459: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 17:19:12.359: INFO: Waiting for pod pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c to disappear
Aug 26 17:19:12.604: INFO: Pod pod-projected-secrets-9b561157-5861-42b6-8315-7382bcc3223c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:19:12.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9102" for this suite.

• [SLOW TEST:12.891 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2730,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:19:13.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-6068abfe-0240-41e1-a240-5ff1ef543728
STEP: Creating a pod to test consume configMaps
Aug 26 17:19:14.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766" in namespace "configmap-2137" to be "Succeeded or Failed"
Aug 26 17:19:15.069: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Pending", Reason="", readiness=false. Elapsed: 148.862411ms
Aug 26 17:19:17.361: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440353192s
Aug 26 17:19:19.461: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540389532s
Aug 26 17:19:21.604: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Pending", Reason="", readiness=false. Elapsed: 6.683961307s
Aug 26 17:19:23.883: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962178368s
Aug 26 17:19:26.102: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Running", Reason="", readiness=true. Elapsed: 11.181833866s
Aug 26 17:19:28.285: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.364222174s
STEP: Saw pod success
Aug 26 17:19:28.285: INFO: Pod "pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766" satisfied condition "Succeeded or Failed"
Aug 26 17:19:28.287: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766 container configmap-volume-test: 
STEP: delete the pod
Aug 26 17:19:29.335: INFO: Waiting for pod pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766 to disappear
Aug 26 17:19:29.410: INFO: Pod pod-configmaps-fc3f8cfa-7dcb-4876-8064-a3c2d5083766 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:19:29.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2137" for this suite.

• [SLOW TEST:16.316 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:19:29.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 26 17:19:30.063: INFO: Waiting up to 5m0s for pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19" in namespace "downward-api-4630" to be "Succeeded or Failed"
Aug 26 17:19:30.261: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Pending", Reason="", readiness=false. Elapsed: 198.398322ms
Aug 26 17:19:32.273: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20967622s
Aug 26 17:19:34.923: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860042507s
Aug 26 17:19:37.135: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Pending", Reason="", readiness=false. Elapsed: 7.071692269s
Aug 26 17:19:39.335: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Running", Reason="", readiness=true. Elapsed: 9.272116648s
Aug 26 17:19:41.615: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.551653892s
STEP: Saw pod success
Aug 26 17:19:41.615: INFO: Pod "downward-api-f926996a-1818-427b-a49e-4dabab26df19" satisfied condition "Succeeded or Failed"
Aug 26 17:19:41.617: INFO: Trying to get logs from node kali-worker pod downward-api-f926996a-1818-427b-a49e-4dabab26df19 container dapi-container: 
STEP: delete the pod
Aug 26 17:19:42.750: INFO: Waiting for pod downward-api-f926996a-1818-427b-a49e-4dabab26df19 to disappear
Aug 26 17:19:43.130: INFO: Pod downward-api-f926996a-1818-427b-a49e-4dabab26df19 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:19:43.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4630" for this suite.

• [SLOW TEST:13.992 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2783,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:19:43.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:19:54.938: INFO: Waiting up to 5m0s for pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda" in namespace "pods-9634" to be "Succeeded or Failed"
Aug 26 17:19:54.992: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda": Phase="Pending", Reason="", readiness=false. Elapsed: 54.311325ms
Aug 26 17:19:56.997: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059426027s
Aug 26 17:19:59.000: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062230357s
Aug 26 17:20:01.004: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda": Phase="Running", Reason="", readiness=true. Elapsed: 6.06679134s
Aug 26 17:20:03.009: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070948024s
STEP: Saw pod success
Aug 26 17:20:03.009: INFO: Pod "client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda" satisfied condition "Succeeded or Failed"
Aug 26 17:20:03.012: INFO: Trying to get logs from node kali-worker pod client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda container env3cont: 
STEP: delete the pod
Aug 26 17:20:03.028: INFO: Waiting for pod client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda to disappear
Aug 26 17:20:03.052: INFO: Pod client-envvars-75881d63-b0a7-46a4-acc9-9672f3d0ceda no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:20:03.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9634" for this suite.

• [SLOW TEST:19.645 seconds]
[k8s.io] Pods
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2797,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:20:03.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:20:14.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8115" for this suite.

• [SLOW TEST:11.626 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":157,"skipped":2849,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:20:14.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 17:20:14.808: INFO: Waiting up to 5m0s for pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5" in namespace "emptydir-9734" to be "Succeeded or Failed"
Aug 26 17:20:14.825: INFO: Pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.692559ms
Aug 26 17:20:16.850: INFO: Pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04220027s
Aug 26 17:20:18.926: INFO: Pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117795706s
Aug 26 17:20:21.436: INFO: Pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.627718301s
STEP: Saw pod success
Aug 26 17:20:21.436: INFO: Pod "pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5" satisfied condition "Succeeded or Failed"
Aug 26 17:20:21.439: INFO: Trying to get logs from node kali-worker pod pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5 container test-container: 
STEP: delete the pod
Aug 26 17:20:21.722: INFO: Waiting for pod pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5 to disappear
Aug 26 17:20:21.764: INFO: Pod pod-81171b5a-74b3-4247-b5c4-79f98ebe02e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:20:21.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9734" for this suite.

• [SLOW TEST:7.142 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2855,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:20:21.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-975185a3-5877-4679-bd20-ed678af8f672
STEP: Creating configMap with name cm-test-opt-upd-263e5999-34d5-4a9b-ad66-b15be781da47
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-975185a3-5877-4679-bd20-ed678af8f672
STEP: Updating configmap cm-test-opt-upd-263e5999-34d5-4a9b-ad66-b15be781da47
STEP: Creating configMap with name cm-test-opt-create-ab5f38b1-88e4-4aee-935d-3670b6fe340a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:20:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4348" for this suite.

• [SLOW TEST:15.255 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2861,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:20:37.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:20:37.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4434" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":160,"skipped":2871,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:20:37.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:20:37.325: INFO: Create a RollingUpdate DaemonSet
Aug 26 17:20:37.330: INFO: Check that daemon pods launch on every node of the cluster
Aug 26 17:20:37.342: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:37.346: INFO: Number of nodes with available pods: 0
Aug 26 17:20:37.346: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:38.352: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:38.356: INFO: Number of nodes with available pods: 0
Aug 26 17:20:38.356: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:39.574: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:39.577: INFO: Number of nodes with available pods: 0
Aug 26 17:20:39.577: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:40.351: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:40.354: INFO: Number of nodes with available pods: 0
Aug 26 17:20:40.354: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:41.366: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:41.594: INFO: Number of nodes with available pods: 0
Aug 26 17:20:41.594: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:42.415: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:42.420: INFO: Number of nodes with available pods: 0
Aug 26 17:20:42.420: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:43.516: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:43.544: INFO: Number of nodes with available pods: 0
Aug 26 17:20:43.544: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:44.460: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:44.466: INFO: Number of nodes with available pods: 0
Aug 26 17:20:44.466: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:20:45.371: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:45.374: INFO: Number of nodes with available pods: 1
Aug 26 17:20:45.374: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:20:46.494: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:46.571: INFO: Number of nodes with available pods: 2
Aug 26 17:20:46.571: INFO: Number of running nodes: 2, number of available pods: 2
Aug 26 17:20:46.571: INFO: Update the DaemonSet to trigger a rollout
Aug 26 17:20:46.651: INFO: Updating DaemonSet daemon-set
Aug 26 17:20:57.928: INFO: Roll back the DaemonSet before rollout is complete
Aug 26 17:20:57.935: INFO: Updating DaemonSet daemon-set
Aug 26 17:20:57.935: INFO: Make sure DaemonSet rollback is complete
Aug 26 17:20:57.973: INFO: Wrong image for pod: daemon-set-2c4bs. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 17:20:57.973: INFO: Pod daemon-set-2c4bs is not available
Aug 26 17:20:58.013: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:20:59.037: INFO: Wrong image for pod: daemon-set-2c4bs. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 17:20:59.037: INFO: Pod daemon-set-2c4bs is not available
Aug 26 17:20:59.041: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:21:00.016: INFO: Wrong image for pod: daemon-set-2c4bs. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 26 17:21:00.016: INFO: Pod daemon-set-2c4bs is not available
Aug 26 17:21:00.020: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:21:01.019: INFO: Pod daemon-set-q4hm5 is not available
Aug 26 17:21:01.023: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8131, will wait for the garbage collector to delete the pods
Aug 26 17:21:01.098: INFO: Deleting DaemonSet.extensions daemon-set took: 5.948011ms
Aug 26 17:21:01.398: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.258555ms
Aug 26 17:21:07.928: INFO: Number of nodes with available pods: 0
Aug 26 17:21:07.928: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 17:21:07.932: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8131/daemonsets","resourceVersion":"1109167"},"items":null}

Aug 26 17:21:07.935: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8131/pods","resourceVersion":"1109167"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:21:07.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8131" for this suite.

• [SLOW TEST:30.762 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":161,"skipped":2881,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:21:07.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 17:21:16.170: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:16.209: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:18.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:18.429: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:20.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:20.216: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:22.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:22.259: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:24.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:24.438: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:26.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:26.214: INFO: Pod pod-with-poststart-http-hook still exists
Aug 26 17:21:28.209: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 26 17:21:28.292: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:21:28.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9063" for this suite.

• [SLOW TEST:20.347 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2890,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:21:28.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-3e02896e-0dc3-4033-ad93-f06924fec52a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3e02896e-0dc3-4033-ad93-f06924fec52a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:22:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9096" for this suite.

• [SLOW TEST:90.380 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2916,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:22:58.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-e75342dd-522c-4dd5-95ae-6496d8bb5c6d
STEP: Creating a pod to test consume configMaps
Aug 26 17:22:58.780: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7" in namespace "projected-2620" to be "Succeeded or Failed"
Aug 26 17:22:58.806: INFO: Pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.368743ms
Aug 26 17:23:00.810: INFO: Pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02939619s
Aug 26 17:23:02.813: INFO: Pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032793096s
Aug 26 17:23:04.818: INFO: Pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03741295s
STEP: Saw pod success
Aug 26 17:23:04.818: INFO: Pod "pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7" satisfied condition "Succeeded or Failed"
Aug 26 17:23:04.821: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 17:23:04.879: INFO: Waiting for pod pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7 to disappear
Aug 26 17:23:04.924: INFO: Pod pod-projected-configmaps-93f6cbe9-0dc1-4599-a2d1-fe9aa9d754e7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:04.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2620" for this suite.

• [SLOW TEST:6.436 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2923,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:05.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 26 17:23:09.996: INFO: Successfully updated pod "labelsupdate643aeb31-12c7-4993-bad7-146c5068b949"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:12.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8943" for this suite.

• [SLOW TEST:6.904 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2937,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:12.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 26 17:23:12.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 26 17:23:22.907: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 17:23:25.865: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:35.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3250" for this suite.

• [SLOW TEST:24.049 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":166,"skipped":2951,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:36.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 26 17:23:41.230: INFO: Successfully updated pod "annotationupdate908ca543-4925-4594-b1ea-851d5e6a5369"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:43.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4630" for this suite.

• [SLOW TEST:7.198 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2959,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:43.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 26 17:23:47.890: INFO: Successfully updated pod "annotationupdate1ea10b1c-f050-426e-b4b9-37f6fe9a0f13"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:49.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6272" for this suite.

• [SLOW TEST:6.668 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2973,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:49.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:23:51.429: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:23:53.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:23:55.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059431, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:23:58.639: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:23:59.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9728" for this suite.
STEP: Destroying namespace "webhook-9728-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.664 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":169,"skipped":2983,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:23:59.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1544
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 26 17:24:01.583: INFO: Found 0 stateful pods, waiting for 3
Aug 26 17:24:11.587: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:11.587: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:11.587: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 17:24:21.586: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:21.587: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:21.587: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 17:24:21.612: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 26 17:24:31.818: INFO: Updating stateful set ss2
Aug 26 17:24:32.160: INFO: Waiting for Pod statefulset-1544/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 26 17:24:42.829: INFO: Found 2 stateful pods, waiting for 3
Aug 26 17:24:52.833: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:52.833: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:24:52.833: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 26 17:25:02.834: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:25:02.834: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:25:02.834: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 26 17:25:02.859: INFO: Updating stateful set ss2
Aug 26 17:25:02.943: INFO: Waiting for Pod statefulset-1544/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:25:13.008: INFO: Updating stateful set ss2
Aug 26 17:25:13.013: INFO: Waiting for StatefulSet statefulset-1544/ss2 to complete update
Aug 26 17:25:13.013: INFO: Waiting for Pod statefulset-1544/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:25:23.022: INFO: Waiting for StatefulSet statefulset-1544/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 26 17:25:33.021: INFO: Deleting all statefulset in ns statefulset-1544
Aug 26 17:25:33.024: INFO: Scaling statefulset ss2 to 0
Aug 26 17:25:53.055: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 17:25:53.058: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:25:53.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1544" for this suite.

• [SLOW TEST:113.480 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":170,"skipped":2990,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:25:53.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:25:53.148: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 26 17:25:53.223: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 26 17:25:58.227: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 26 17:25:58.227: INFO: Creating deployment "test-rolling-update-deployment"
Aug 26 17:25:58.232: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 26 17:25:58.237: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 26 17:26:00.245: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 26 17:26:00.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:26:02.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059558, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:26:04.470: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 26 17:26:05.050: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-483 /apis/apps/v1/namespaces/deployment-483/deployments/test-rolling-update-deployment 37e6fbf6-5361-4c4c-ba32-16bf6a28d286 1110646 1 2020-08-26 17:25:58 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-08-26 17:25:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 17:26:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005ae74d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-26 17:25:58 +0000 UTC,LastTransitionTime:2020-08-26 17:25:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-26 17:26:03 +0000 UTC,LastTransitionTime:2020-08-26 17:25:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 26 17:26:05.274: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-483 /apis/apps/v1/namespaces/deployment-483/replicasets/test-rolling-update-deployment-59d5cb45c7 f421e749-9f8c-47c7-bd36-f93c41f896e8 1110635 1 2020-08-26 17:25:58 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 37e6fbf6-5361-4c4c-ba32-16bf6a28d286 0xc005ae7a37 0xc005ae7a38}] []  [{kube-controller-manager Update apps/v1 2020-08-26 17:26:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 55 101 54 102 98 102 54 45 53 51 54 49 45 52 99 52 99 45 98 97 51 50 45 49 54 98 102 54 97 50 56 100 50 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 
115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005ae7ac8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:26:05.274: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 26 17:26:05.274: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-483 /apis/apps/v1/namespaces/deployment-483/replicasets/test-rolling-update-controller b9e328a9-3c25-4f45-ba11-5c202029fffe 1110645 2 2020-08-26 17:25:53 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 37e6fbf6-5361-4c4c-ba32-16bf6a28d286 0xc005ae7927 0xc005ae7928}] []  [{e2e.test Update apps/v1 2020-08-26 17:25:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 17:26:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 55 101 54 102 98 102 54 45 53 51 54 49 45 52 99 52 99 45 98 97 51 50 45 49 54 98 102 54 97 50 56 100 50 56 54 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005ae79c8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:26:05.278: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-w7vkt" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-w7vkt test-rolling-update-deployment-59d5cb45c7- deployment-483 /api/v1/namespaces/deployment-483/pods/test-rolling-update-deployment-59d5cb45c7-w7vkt b0f7d607-fab3-4344-9418-438f3fe25d95 1110634 0 2020-08-26 17:25:58 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 f421e749-9f8c-47c7-bd36-f93c41f896e8 0xc005ae7fa7 0xc005ae7fa8}] []  [{kube-controller-manager Update v1 2020-08-26 17:25:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 52 50 49 101 55 52 57 45 57 102 56 99 45 52 55 99 55 45 98 100 51 54 45 102 57 51 99 52 49 102 56 57 54 101 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:26:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 
100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q6thc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q6thc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q6thc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachab
le,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:25:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:26:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:26:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:25:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.6,StartTime:2020-08-26 17:25:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:26:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://0b54680282989e69493ecc556f991cf564df5fbc9f0ec7453618cad9ca54e9b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:26:05.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-483" for this suite.

• [SLOW TEST:12.376 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":171,"skipped":2992,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:26:05.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 17:26:05.639: INFO: Waiting up to 5m0s for pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab" in namespace "emptydir-375" to be "Succeeded or Failed"
Aug 26 17:26:05.657: INFO: Pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 17.182325ms
Aug 26 17:26:07.859: INFO: Pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220043719s
Aug 26 17:26:09.864: INFO: Pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab": Phase="Running", Reason="", readiness=true. Elapsed: 4.225022337s
Aug 26 17:26:11.949: INFO: Pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.309365415s
STEP: Saw pod success
Aug 26 17:26:11.949: INFO: Pod "pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab" satisfied condition "Succeeded or Failed"
Aug 26 17:26:12.219: INFO: Trying to get logs from node kali-worker2 pod pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab container test-container: 
STEP: delete the pod
Aug 26 17:26:12.318: INFO: Waiting for pod pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab to disappear
Aug 26 17:26:12.338: INFO: Pod pod-0dfb1181-7d34-4ee9-a65b-68a68162a9ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:26:12.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-375" for this suite.

• [SLOW TEST:6.896 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:26:12.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:26:18.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6579" for this suite.

• [SLOW TEST:6.176 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":3018,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:26:18.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:26:20.703: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:26:22.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059581, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:26:24.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059581, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734059580, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:26:27.774: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:26:27.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6440-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:26:28.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2075" for this suite.
STEP: Destroying namespace "webhook-2075-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.473 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":174,"skipped":3019,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:26:29.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-5f8a600d-5fe3-4b78-98fa-4d011eee151e in namespace container-probe-7766
Aug 26 17:26:37.090: INFO: Started pod liveness-5f8a600d-5fe3-4b78-98fa-4d011eee151e in namespace container-probe-7766
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 17:26:37.093: INFO: Initial restart count of pod liveness-5f8a600d-5fe3-4b78-98fa-4d011eee151e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:30:41.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7766" for this suite.

• [SLOW TEST:252.698 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":3025,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:30:41.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 26 17:30:43.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5288'
Aug 26 17:30:54.368: INFO: stderr: ""
Aug 26 17:30:54.368: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 17:30:55.373: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:55.373: INFO: Found 0 / 1
Aug 26 17:30:56.373: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:56.373: INFO: Found 0 / 1
Aug 26 17:30:57.373: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:57.373: INFO: Found 0 / 1
Aug 26 17:30:58.378: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:58.378: INFO: Found 0 / 1
Aug 26 17:30:59.696: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:59.696: INFO: Found 1 / 1
Aug 26 17:30:59.696: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 26 17:30:59.699: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:59.699: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 17:30:59.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config patch pod agnhost-master-xgvvq --namespace=kubectl-5288 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 26 17:30:59.798: INFO: stderr: ""
Aug 26 17:30:59.798: INFO: stdout: "pod/agnhost-master-xgvvq patched\n"
STEP: checking annotations
Aug 26 17:30:59.973: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:30:59.974: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:30:59.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5288" for this suite.

• [SLOW TEST:18.282 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":176,"skipped":3035,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:30:59.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-11b662f0-3fdd-4bab-834e-4762d6b1d910
STEP: Creating secret with name secret-projected-all-test-volume-5a2d5bed-8648-4fbf-94b4-2001edce488c
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 26 17:31:00.403: INFO: Waiting up to 5m0s for pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2" in namespace "projected-3581" to be "Succeeded or Failed"
Aug 26 17:31:00.510: INFO: Pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2": Phase="Pending", Reason="", readiness=false. Elapsed: 106.896921ms
Aug 26 17:31:02.795: INFO: Pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392126967s
Aug 26 17:31:05.085: INFO: Pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682312141s
Aug 26 17:31:07.121: INFO: Pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.718036625s
STEP: Saw pod success
Aug 26 17:31:07.121: INFO: Pod "projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2" satisfied condition "Succeeded or Failed"
Aug 26 17:31:07.235: INFO: Trying to get logs from node kali-worker2 pod projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2 container projected-all-volume-test: 
STEP: delete the pod
Aug 26 17:31:08.089: INFO: Waiting for pod projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2 to disappear
Aug 26 17:31:08.102: INFO: Pod projected-volume-4fc908a3-a059-4398-b4b1-dffbcc8018e2 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:31:08.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3581" for this suite.

• [SLOW TEST:8.126 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":177,"skipped":3067,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:31:08.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9827.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9827.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9827.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9827.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9827.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:31:20.767: INFO: DNS probes using dns-9827/dns-test-3c492240-4d78-4c3c-946b-acc844e3e24b succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:31:21.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9827" for this suite.

• [SLOW TEST:13.889 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":178,"skipped":3076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:31:21.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:31:34.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9554" for this suite.

• [SLOW TEST:12.631 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3102,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:31:34.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:31:36.390: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 17:31:36.403: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:36.439: INFO: Number of nodes with available pods: 0
Aug 26 17:31:36.439: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:37.444: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:37.447: INFO: Number of nodes with available pods: 0
Aug 26 17:31:37.447: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:38.477: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:38.481: INFO: Number of nodes with available pods: 0
Aug 26 17:31:38.481: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:39.653: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:39.682: INFO: Number of nodes with available pods: 0
Aug 26 17:31:39.682: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:40.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:40.767: INFO: Number of nodes with available pods: 0
Aug 26 17:31:40.767: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:41.445: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:41.449: INFO: Number of nodes with available pods: 0
Aug 26 17:31:41.449: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:31:43.051: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:43.320: INFO: Number of nodes with available pods: 2
Aug 26 17:31:43.320: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 26 17:31:43.725: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:43.725: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:43.740: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:44.809: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:44.809: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:44.814: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:45.835: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:45.835: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:45.838: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:46.745: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:46.745: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:46.750: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:47.758: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:47.758: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:47.761: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:48.746: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:48.746: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:48.746: INFO: Pod daemon-set-t54gj is not available
Aug 26 17:31:48.750: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:49.782: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:49.782: INFO: Wrong image for pod: daemon-set-t54gj. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:49.782: INFO: Pod daemon-set-t54gj is not available
Aug 26 17:31:49.785: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:51.030: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:51.030: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:51.034: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:51.745: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:51.745: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:51.748: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:53.090: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:53.090: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:53.548: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:53.964: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:53.965: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:53.969: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:54.877: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:54.877: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:54.881: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:55.776: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:55.776: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:55.780: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:56.748: INFO: Pod daemon-set-9b96x is not available
Aug 26 17:31:56.748: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:56.752: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:57.970: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:57.973: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:58.751: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:58.754: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:31:59.823: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:31:59.827: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:01.188: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:32:01.419: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:01.745: INFO: Wrong image for pod: daemon-set-k55fb. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 26 17:32:01.745: INFO: Pod daemon-set-k55fb is not available
Aug 26 17:32:01.749: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:03.051: INFO: Pod daemon-set-7hm84 is not available
Aug 26 17:32:03.471: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 26 17:32:03.474: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:03.990: INFO: Number of nodes with available pods: 1
Aug 26 17:32:03.990: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:32:04.996: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:04.999: INFO: Number of nodes with available pods: 1
Aug 26 17:32:04.999: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:32:05.996: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:06.000: INFO: Number of nodes with available pods: 1
Aug 26 17:32:06.000: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:32:06.997: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:07.001: INFO: Number of nodes with available pods: 1
Aug 26 17:32:07.001: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:32:07.996: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:07.999: INFO: Number of nodes with available pods: 1
Aug 26 17:32:07.999: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:32:08.995: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:32:08.998: INFO: Number of nodes with available pods: 2
Aug 26 17:32:08.998: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7482, will wait for the garbage collector to delete the pods
Aug 26 17:32:09.069: INFO: Deleting DaemonSet.extensions daemon-set took: 5.799036ms
Aug 26 17:32:09.370: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.236586ms
Aug 26 17:32:18.373: INFO: Number of nodes with available pods: 0
Aug 26 17:32:18.373: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 17:32:18.376: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7482/daemonsets","resourceVersion":"1111999"},"items":null}

Aug 26 17:32:18.379: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7482/pods","resourceVersion":"1111999"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:32:18.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7482" for this suite.

• [SLOW TEST:43.776 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":180,"skipped":3114,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:32:18.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:32:18.557: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0" in namespace "security-context-test-9860" to be "Succeeded or Failed"
Aug 26 17:32:18.610: INFO: Pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0": Phase="Pending", Reason="", readiness=false. Elapsed: 53.750053ms
Aug 26 17:32:20.615: INFO: Pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058555977s
Aug 26 17:32:22.751: INFO: Pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194685311s
Aug 26 17:32:24.784: INFO: Pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227370872s
Aug 26 17:32:24.784: INFO: Pod "busybox-user-65534-f4c59efb-5221-4919-bae3-7b31bd8c06a0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:32:24.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9860" for this suite.

• [SLOW TEST:6.386 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3122,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:32:24.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 26 17:32:25.262: INFO: Waiting up to 5m0s for pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f" in namespace "emptydir-2823" to be "Succeeded or Failed"
Aug 26 17:32:25.469: INFO: Pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 206.867898ms
Aug 26 17:32:28.066: INFO: Pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803589493s
Aug 26 17:32:30.176: INFO: Pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.913982711s
Aug 26 17:32:32.179: INFO: Pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.917209389s
STEP: Saw pod success
Aug 26 17:32:32.179: INFO: Pod "pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f" satisfied condition "Succeeded or Failed"
Aug 26 17:32:32.183: INFO: Trying to get logs from node kali-worker2 pod pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f container test-container: 
STEP: delete the pod
Aug 26 17:32:32.393: INFO: Waiting for pod pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f to disappear
Aug 26 17:32:32.423: INFO: Pod pod-59c856b0-80ed-4ffc-a270-0b812fbacd6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:32:32.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2823" for this suite.

• [SLOW TEST:7.653 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3132,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:32:32.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3170
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 26 17:32:33.549: INFO: Found 0 stateful pods, waiting for 3
Aug 26 17:32:43.565: INFO: Found 2 stateful pods, waiting for 3
Aug 26 17:32:53.553: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:32:53.553: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:32:53.553: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 26 17:32:53.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3170 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 17:32:54.125: INFO: stderr: "I0826 17:32:53.695833    3678 log.go:172] (0xc0008ee630) (0xc0005fb5e0) Create stream\nI0826 17:32:53.695896    3678 log.go:172] (0xc0008ee630) (0xc0005fb5e0) Stream added, broadcasting: 1\nI0826 17:32:53.699363    3678 log.go:172] (0xc0008ee630) Reply frame received for 1\nI0826 17:32:53.699408    3678 log.go:172] (0xc0008ee630) (0xc0009ae000) Create stream\nI0826 17:32:53.699420    3678 log.go:172] (0xc0008ee630) (0xc0009ae000) Stream added, broadcasting: 3\nI0826 17:32:53.700585    3678 log.go:172] (0xc0008ee630) Reply frame received for 3\nI0826 17:32:53.700643    3678 log.go:172] (0xc0008ee630) (0xc000533680) Create stream\nI0826 17:32:53.700664    3678 log.go:172] (0xc0008ee630) (0xc000533680) Stream added, broadcasting: 5\nI0826 17:32:53.701851    3678 log.go:172] (0xc0008ee630) Reply frame received for 5\nI0826 17:32:53.759802    3678 log.go:172] (0xc0008ee630) Data frame received for 5\nI0826 17:32:53.759831    3678 log.go:172] (0xc000533680) (5) Data frame handling\nI0826 17:32:53.759851    3678 log.go:172] (0xc000533680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 17:32:54.116061    3678 log.go:172] (0xc0008ee630) Data frame received for 3\nI0826 17:32:54.116127    3678 log.go:172] (0xc0008ee630) Data frame received for 5\nI0826 17:32:54.116171    3678 log.go:172] (0xc000533680) (5) Data frame handling\nI0826 17:32:54.116209    3678 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0826 17:32:54.116229    3678 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0826 17:32:54.116246    3678 log.go:172] (0xc0008ee630) Data frame received for 3\nI0826 17:32:54.116259    3678 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0826 17:32:54.117820    3678 log.go:172] (0xc0008ee630) Data frame received for 1\nI0826 17:32:54.117916    3678 log.go:172] (0xc0005fb5e0) (1) Data frame handling\nI0826 17:32:54.117948    3678 log.go:172] (0xc0005fb5e0) (1) Data frame sent\nI0826 17:32:54.117975    3678 log.go:172] (0xc0008ee630) (0xc0005fb5e0) Stream removed, broadcasting: 1\nI0826 17:32:54.118001    3678 log.go:172] (0xc0008ee630) Go away received\nI0826 17:32:54.119036    3678 log.go:172] (0xc0008ee630) (0xc0005fb5e0) Stream removed, broadcasting: 1\nI0826 17:32:54.119062    3678 log.go:172] (0xc0008ee630) (0xc0009ae000) Stream removed, broadcasting: 3\nI0826 17:32:54.119075    3678 log.go:172] (0xc0008ee630) (0xc000533680) Stream removed, broadcasting: 5\n"
Aug 26 17:32:54.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 17:32:54.126: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 26 17:33:04.202: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 26 17:33:14.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3170 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 17:33:14.509: INFO: stderr: "I0826 17:33:14.387706    3699 log.go:172] (0xc000626a50) (0xc00069a1e0) Create stream\nI0826 17:33:14.387772    3699 log.go:172] (0xc000626a50) (0xc00069a1e0) Stream added, broadcasting: 1\nI0826 17:33:14.390242    3699 log.go:172] (0xc000626a50) Reply frame received for 1\nI0826 17:33:14.390286    3699 log.go:172] (0xc000626a50) (0xc0006cd2c0) Create stream\nI0826 17:33:14.390296    3699 log.go:172] (0xc000626a50) (0xc0006cd2c0) Stream added, broadcasting: 3\nI0826 17:33:14.391249    3699 log.go:172] (0xc000626a50) Reply frame received for 3\nI0826 17:33:14.391288    3699 log.go:172] (0xc000626a50) (0xc00069a280) Create stream\nI0826 17:33:14.391301    3699 log.go:172] (0xc000626a50) (0xc00069a280) Stream added, broadcasting: 5\nI0826 17:33:14.392136    3699 log.go:172] (0xc000626a50) Reply frame received for 5\nI0826 17:33:14.493738    3699 log.go:172] (0xc000626a50) Data frame received for 3\nI0826 17:33:14.493769    3699 log.go:172] (0xc0006cd2c0) (3) Data frame handling\nI0826 17:33:14.493778    3699 log.go:172] (0xc0006cd2c0) (3) Data frame sent\nI0826 17:33:14.493783    3699 log.go:172] (0xc000626a50) Data frame received for 3\nI0826 17:33:14.493788    3699 log.go:172] (0xc0006cd2c0) (3) Data frame handling\nI0826 17:33:14.493808    3699 log.go:172] (0xc000626a50) Data frame received for 5\nI0826 17:33:14.493813    3699 log.go:172] (0xc00069a280) (5) Data frame handling\nI0826 17:33:14.493822    3699 log.go:172] (0xc00069a280) (5) Data frame sent\nI0826 17:33:14.493832    3699 log.go:172] (0xc000626a50) Data frame received for 5\nI0826 17:33:14.493846    3699 log.go:172] (0xc00069a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 17:33:14.495283    3699 log.go:172] (0xc000626a50) Data frame received for 1\nI0826 17:33:14.495302    3699 log.go:172] (0xc00069a1e0) (1) Data frame handling\nI0826 17:33:14.495321    3699 log.go:172] (0xc00069a1e0) (1) Data frame sent\nI0826 17:33:14.495342    3699 log.go:172] (0xc000626a50) (0xc00069a1e0) Stream removed, broadcasting: 1\nI0826 17:33:14.495494    3699 log.go:172] (0xc000626a50) Go away received\nI0826 17:33:14.495650    3699 log.go:172] (0xc000626a50) (0xc00069a1e0) Stream removed, broadcasting: 1\nI0826 17:33:14.495665    3699 log.go:172] (0xc000626a50) (0xc0006cd2c0) Stream removed, broadcasting: 3\nI0826 17:33:14.495670    3699 log.go:172] (0xc000626a50) (0xc00069a280) Stream removed, broadcasting: 5\n"
Aug 26 17:33:14.509: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 17:33:14.509: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 17:33:24.525: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:33:24.525: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:33:24.525: INFO: Waiting for Pod statefulset-3170/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:33:24.525: INFO: Waiting for Pod statefulset-3170/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:33:34.533: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:33:34.533: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:33:34.533: INFO: Waiting for Pod statefulset-3170/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 26 17:33:45.265: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:33:45.265: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Aug 26 17:33:54.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3170 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 26 17:33:55.302: INFO: stderr: "I0826 17:33:55.049089    3720 log.go:172] (0xc000736b00) (0xc0006e8140) Create stream\nI0826 17:33:55.049138    3720 log.go:172] (0xc000736b00) (0xc0006e8140) Stream added, broadcasting: 1\nI0826 17:33:55.050946    3720 log.go:172] (0xc000736b00) Reply frame received for 1\nI0826 17:33:55.050977    3720 log.go:172] (0xc000736b00) (0xc000208000) Create stream\nI0826 17:33:55.050984    3720 log.go:172] (0xc000736b00) (0xc000208000) Stream added, broadcasting: 3\nI0826 17:33:55.051603    3720 log.go:172] (0xc000736b00) Reply frame received for 3\nI0826 17:33:55.051631    3720 log.go:172] (0xc000736b00) (0xc0005d1220) Create stream\nI0826 17:33:55.051640    3720 log.go:172] (0xc000736b00) (0xc0005d1220) Stream added, broadcasting: 5\nI0826 17:33:55.052207    3720 log.go:172] (0xc000736b00) Reply frame received for 5\nI0826 17:33:55.104322    3720 log.go:172] (0xc000736b00) Data frame received for 5\nI0826 17:33:55.104354    3720 log.go:172] (0xc0005d1220) (5) Data frame handling\nI0826 17:33:55.104376    3720 log.go:172] (0xc0005d1220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0826 17:33:55.291866    3720 log.go:172] (0xc000736b00) Data frame received for 5\nI0826 17:33:55.291902    3720 log.go:172] (0xc0005d1220) (5) Data frame handling\nI0826 17:33:55.291945    3720 log.go:172] (0xc000736b00) Data frame received for 3\nI0826 17:33:55.291962    3720 log.go:172] (0xc000208000) (3) Data frame handling\nI0826 17:33:55.291978    3720 log.go:172] (0xc000208000) (3) Data frame sent\nI0826 17:33:55.291993    3720 log.go:172] (0xc000736b00) Data frame received for 3\nI0826 17:33:55.292004    3720 log.go:172] (0xc000208000) (3) Data frame handling\nI0826 17:33:55.292395    3720 log.go:172] (0xc000736b00) Data frame received for 1\nI0826 17:33:55.292502    3720 log.go:172] (0xc0006e8140) (1) Data frame handling\nI0826 17:33:55.292542    3720 log.go:172] (0xc0006e8140) (1) Data frame sent\nI0826 17:33:55.292566    3720 log.go:172] (0xc000736b00) (0xc0006e8140) Stream removed, broadcasting: 1\nI0826 17:33:55.292600    3720 log.go:172] (0xc000736b00) Go away received\nI0826 17:33:55.293089    3720 log.go:172] (0xc000736b00) (0xc0006e8140) Stream removed, broadcasting: 1\nI0826 17:33:55.293106    3720 log.go:172] (0xc000736b00) (0xc000208000) Stream removed, broadcasting: 3\nI0826 17:33:55.293115    3720 log.go:172] (0xc000736b00) (0xc0005d1220) Stream removed, broadcasting: 5\n"
Aug 26 17:33:55.302: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 26 17:33:55.302: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 26 17:34:05.838: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 26 17:34:15.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3170 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 26 17:34:16.139: INFO: stderr: "I0826 17:34:16.063332    3740 log.go:172] (0xc00003a6e0) (0xc0008ba000) Create stream\nI0826 17:34:16.063376    3740 log.go:172] (0xc00003a6e0) (0xc0008ba000) Stream added, broadcasting: 1\nI0826 17:34:16.065131    3740 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0826 17:34:16.065152    3740 log.go:172] (0xc00003a6e0) (0xc000434b40) Create stream\nI0826 17:34:16.065159    3740 log.go:172] (0xc00003a6e0) (0xc000434b40) Stream added, broadcasting: 3\nI0826 17:34:16.065780    3740 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0826 17:34:16.065803    3740 log.go:172] (0xc00003a6e0) (0xc00076f2c0) Create stream\nI0826 17:34:16.065823    3740 log.go:172] (0xc00003a6e0) (0xc00076f2c0) Stream added, broadcasting: 5\nI0826 17:34:16.066676    3740 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0826 17:34:16.132605    3740 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0826 17:34:16.132630    3740 log.go:172] (0xc00076f2c0) (5) Data frame handling\nI0826 17:34:16.132636    3740 log.go:172] (0xc00076f2c0) (5) Data frame sent\nI0826 17:34:16.132641    3740 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0826 17:34:16.132645    3740 log.go:172] (0xc00076f2c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0826 17:34:16.132659    3740 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0826 17:34:16.132663    3740 log.go:172] (0xc000434b40) (3) Data frame handling\nI0826 17:34:16.132668    3740 log.go:172] (0xc000434b40) (3) Data frame sent\nI0826 17:34:16.132672    3740 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0826 17:34:16.132676    3740 log.go:172] (0xc000434b40) (3) Data frame handling\nI0826 17:34:16.133786    3740 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0826 17:34:16.133806    3740 log.go:172] (0xc0008ba000) (1) Data frame handling\nI0826 17:34:16.133814    3740 log.go:172] (0xc0008ba000) (1) Data frame sent\nI0826 17:34:16.133824    3740 log.go:172] (0xc00003a6e0) (0xc0008ba000) Stream removed, broadcasting: 1\nI0826 17:34:16.133834    3740 log.go:172] (0xc00003a6e0) Go away received\nI0826 17:34:16.134111    3740 log.go:172] (0xc00003a6e0) (0xc0008ba000) Stream removed, broadcasting: 1\nI0826 17:34:16.134127    3740 log.go:172] (0xc00003a6e0) (0xc000434b40) Stream removed, broadcasting: 3\nI0826 17:34:16.134133    3740 log.go:172] (0xc00003a6e0) (0xc00076f2c0) Stream removed, broadcasting: 5\n"
Aug 26 17:34:16.139: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 26 17:34:16.139: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 26 17:34:27.407: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:34:27.408: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 17:34:27.408: INFO: Waiting for Pod statefulset-3170/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 17:34:37.415: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:34:37.415: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 17:34:37.415: INFO: Waiting for Pod statefulset-3170/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 26 17:34:47.415: INFO: Waiting for StatefulSet statefulset-3170/ss2 to complete update
Aug 26 17:34:47.415: INFO: Waiting for Pod statefulset-3170/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 26 17:34:57.415: INFO: Deleting all statefulset in ns statefulset-3170
Aug 26 17:34:57.417: INFO: Scaling statefulset ss2 to 0
Aug 26 17:35:18.470: INFO: Waiting for statefulset status.replicas updated to 0
Aug 26 17:35:18.473: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:35:18.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3170" for this suite.

• [SLOW TEST:166.053 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":183,"skipped":3138,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:35:18.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-c8c98dd2-7921-4c08-8f69-6aa307769956
STEP: Creating a pod to test consume secrets
Aug 26 17:35:18.591: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26" in namespace "projected-5070" to be "Succeeded or Failed"
Aug 26 17:35:18.594: INFO: Pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.19273ms
Aug 26 17:35:20.599: INFO: Pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007428431s
Aug 26 17:35:22.627: INFO: Pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035424487s
Aug 26 17:35:24.702: INFO: Pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11094622s
STEP: Saw pod success
Aug 26 17:35:24.702: INFO: Pod "pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26" satisfied condition "Succeeded or Failed"
Aug 26 17:35:24.716: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26 container projected-secret-volume-test: 
STEP: delete the pod
Aug 26 17:35:24.790: INFO: Waiting for pod pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26 to disappear
Aug 26 17:35:24.837: INFO: Pod pod-projected-secrets-2d3cbc44-fc5d-4e6b-813e-f8f7c0959e26 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:35:24.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5070" for this suite.

• [SLOW TEST:6.354 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:35:24.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0826 17:35:35.262766       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 17:35:35.262: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:35:35.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4216" for this suite.

• [SLOW TEST:10.415 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":185,"skipped":3166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:35:35.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-02941080-ef8e-4396-ae1e-d5feb831bc6f
STEP: Creating a pod to test consume configMaps
Aug 26 17:35:35.773: INFO: Waiting up to 5m0s for pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d" in namespace "configmap-7251" to be "Succeeded or Failed"
Aug 26 17:35:35.965: INFO: Pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d": Phase="Pending", Reason="", readiness=false. Elapsed: 192.359767ms
Aug 26 17:35:37.969: INFO: Pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196572755s
Aug 26 17:35:39.973: INFO: Pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200714038s
Aug 26 17:35:41.981: INFO: Pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208472892s
STEP: Saw pod success
Aug 26 17:35:41.981: INFO: Pod "pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d" satisfied condition "Succeeded or Failed"
Aug 26 17:35:42.271: INFO: Trying to get logs from node kali-worker pod pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d container configmap-volume-test: 
STEP: delete the pod
Aug 26 17:35:42.420: INFO: Waiting for pod pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d to disappear
Aug 26 17:35:42.455: INFO: Pod pod-configmaps-d42e3510-736f-47e7-ad2d-098bb1a2652d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:35:42.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7251" for this suite.

• [SLOW TEST:7.190 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3188,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:35:42.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-b82495e5-6804-471b-b3e0-4b5a6f4f6ee2
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:35:42.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6827" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":187,"skipped":3208,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:35:42.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 26 17:35:54.896: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:35:54.907: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:35:56.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:35:56.912: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:35:58.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:35:58.956: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:36:00.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:36:00.911: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:36:02.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:36:02.912: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:36:04.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:36:04.934: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:36:06.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:36:06.913: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 26 17:36:08.907: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 26 17:36:10.362: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:36:10.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8179" for this suite.

• [SLOW TEST:27.860 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3208,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:36:10.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:36:10.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9521" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":189,"skipped":3219,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:36:10.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-7bcfe169-c7d0-4f05-b13e-3fc26e3a7d3a
STEP: Creating a pod to test consume secrets
Aug 26 17:36:10.813: INFO: Waiting up to 5m0s for pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd" in namespace "secrets-948" to be "Succeeded or Failed"
Aug 26 17:36:10.872: INFO: Pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 58.399305ms
Aug 26 17:36:12.929: INFO: Pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115472478s
Aug 26 17:36:14.940: INFO: Pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126600637s
Aug 26 17:36:16.947: INFO: Pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133589115s
STEP: Saw pod success
Aug 26 17:36:16.947: INFO: Pod "pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd" satisfied condition "Succeeded or Failed"
Aug 26 17:36:16.949: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd container secret-volume-test: 
STEP: delete the pod
Aug 26 17:36:17.261: INFO: Waiting for pod pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd to disappear
Aug 26 17:36:17.299: INFO: Pod pod-secrets-a1fae8c3-22f0-4137-b6e0-665d3f99c8fd no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:36:17.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-948" for this suite.

• [SLOW TEST:6.766 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3257,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:36:17.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:36:24.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7976" for this suite.

• [SLOW TEST:7.181 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":191,"skipped":3270,"failed":0}
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:36:24.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:36:24.951: INFO: Creating deployment "webserver-deployment"
Aug 26 17:36:25.061: INFO: Waiting for observed generation 1
Aug 26 17:36:27.110: INFO: Waiting for all required pods to come up
Aug 26 17:36:27.114: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 26 17:36:41.319: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 26 17:36:41.348: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 26 17:36:41.401: INFO: Updating deployment webserver-deployment
Aug 26 17:36:41.401: INFO: Waiting for observed generation 2
Aug 26 17:36:43.816: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 26 17:36:43.819: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 26 17:36:44.163: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 17:36:44.875: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 26 17:36:44.876: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 26 17:36:44.991: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 26 17:36:44.994: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 26 17:36:44.994: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 26 17:36:44.999: INFO: Updating deployment webserver-deployment
Aug 26 17:36:44.999: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 26 17:36:46.196: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 26 17:36:49.200: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
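Those two replica counts are the proportional-scaling result the test is after: before the scale-up the old and new ReplicaSets sit at 8 and 5 replicas (13 total), and scaling the Deployment from 10 to 30 with maxSurge=3 permits 33 replicas in flight, so the controller splits the 33 roughly in the existing 8:5 ratio, i.e. 33 × 8/13 ≈ 20 for the old ReplicaSet and the remaining 13 for the new one, which is what the two verification steps above report.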
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 26 17:36:50.199: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2354 /apis/apps/v1/namespaces/deployment-2354/deployments/webserver-deployment e02d94c4-a219-4672-af08-33de18146663 1113664 3 2020-08-26 17:36:24 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-26 17:36:44 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ae0d58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-26 17:36:45 +0000 UTC,LastTransitionTime:2020-08-26 17:36:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-26 17:36:46 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Aug 26 17:36:50.922: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-2354 /apis/apps/v1/namespaces/deployment-2354/replicasets/webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 1113660 3 2020-08-26 17:36:41 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e02d94c4-a219-4672-af08-33de18146663 0xc005f8dd27 0xc005f8dd28}] []  [{kube-controller-manager Update apps/v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 48 50 100 57 52 99 52 45 97 50 49 57 45 52 54 55 50 45 97 102 48 56 45 51 51 100 101 49 56 49 52 54 54 54 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 
125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f8dda8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:36:50.922: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Aug 26 17:36:50.922: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-2354 /apis/apps/v1/namespaces/deployment-2354/replicasets/webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 1113653 3 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e02d94c4-a219-4672-af08-33de18146663 0xc005f8de07 0xc005f8de08}] []  [{kube-controller-manager Update apps/v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 48 50 100 57 52 99 52 45 97 50 49 57 45 52 54 55 50 45 97 102 48 56 45 51 51 100 101 49 56 49 52 54 54 54 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f8de78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 26 17:36:51.520: INFO: Pod "webserver-deployment-6676bcd6d4-4rnk5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-4rnk5 webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-4rnk5 d13ce437-128e-4fa0-a0fd-26c18728f968 1113695 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae11a7 0xc002ae11a8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.520: INFO: Pod "webserver-deployment-6676bcd6d4-88zxk" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-88zxk webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-88zxk 9e278d1d-8759-4198-ae59-5509da1f1314 1113674 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1357 0xc002ae1358}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.521: INFO: Pod "webserver-deployment-6676bcd6d4-brswc" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-brswc webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-brswc 43a681e7-9ff6-4a58-ad3c-70026ee46ef5 1113576 0 2020-08-26 17:36:41 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1507 0xc002ae1508}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.521: INFO: Pod "webserver-deployment-6676bcd6d4-dvwx4" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dvwx4 webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-dvwx4 d12a39d9-7360-48a6-acc7-2016671b4bd5 1113730 0 2020-08-26 17:36:42 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae16b7 0xc002ae16b8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 48 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.100,StartTime:2020-08-26 17:36:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
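The dump above shows why these pods stay Pending: the Deployment has been rolled to the image tag webserver:404, which does not exist, so each replacement pod reports a ContainerStateWaiting reason of ErrImagePull (or ContainerCreating while the pull is still in flight). A minimal client-go sketch, assuming a reachable cluster and the usual ~/.kube/config rather than anything from the test framework, that lists just those waiting reasons instead of dumping whole objects; the namespace and pod-template-hash label are taken from the log above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster and the kubeconfig used by the run above (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector come from this test run; adjust for other runs.
	pods, err := clientset.CoreV1().Pods("deployment-2354").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=6676bcd6d4"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// Typical reasons during this rollout: ErrImagePull, ImagePullBackOff, ContainerCreating.
				fmt.Printf("%s\t%s\t%s\n", p.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}
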
Aug 26 17:36:51.521: INFO: Pod "webserver-deployment-6676bcd6d4-hfshg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hfshg webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-hfshg e483be62-aef1-452e-9a1d-1b199641265c 1113687 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae18a7 0xc002ae18a8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.522: INFO: Pod "webserver-deployment-6676bcd6d4-kxpzh" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kxpzh webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-kxpzh eeb46bcf-2a64-41b4-9cab-eea019e200be 1113728 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1a67 0xc002ae1a68}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.522: INFO: Pod "webserver-deployment-6676bcd6d4-l9g4g" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-l9g4g webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-l9g4g b0b49801-5073-437c-a2f6-7575ae892548 1113709 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1c17 0xc002ae1c18}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.522: INFO: Pod "webserver-deployment-6676bcd6d4-lhj6m" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lhj6m webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-lhj6m bfcf54f2-e2f2-4f08-be6e-297af47168a1 1113652 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1dc7 0xc002ae1dc8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.523: INFO: Pod "webserver-deployment-6676bcd6d4-mtrxz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mtrxz webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-mtrxz 04859622-0718-4340-ad06-66222d900dab 1113699 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc002ae1f77 0xc002ae1f78}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.523: INFO: Pod "webserver-deployment-6676bcd6d4-pwz2b" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pwz2b webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-pwz2b 3682a32c-1c11-4257-a7d8-cd0b13dd62eb 1113696 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc0052d0167 0xc0052d0168}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
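The FieldsV1{Raw:*[...]} runs of numbers in these Pod dumps are the managedFields entries; the Go value printer renders their raw JSON payload as decimal ASCII byte values instead of text. A small standalone sketch (an illustrative helper, not part of the test framework) that turns such a run back into the JSON it encodes:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRawBytes converts a "Raw:*[...]" dump, where every element is the
// decimal value of one ASCII byte, back into the JSON string it encodes.
func decodeRawBytes(dump string) (string, error) {
	fields := strings.Fields(dump)
	buf := make([]byte, 0, len(fields))
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return "", fmt.Errorf("not a decimal byte value: %q", f)
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// The first few values of a kube-controller-manager entry from the dumps above.
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
	decoded, err := decodeRawBytes(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // prints: {"f:metadata":{
}

Decoded in full, the kube-controller-manager entries list the metadata and spec fields that controller applied, while the kubelet entries list the status fields it reported.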
Aug 26 17:36:51.523: INFO: Pod "webserver-deployment-6676bcd6d4-txk8f" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-txk8f webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-txk8f 3a18bee5-8797-4184-9f64-3765eb1bb76b 1113582 0 2020-08-26 17:36:41 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc0052d0337 0xc0052d0338}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.523: INFO: Pod "webserver-deployment-6676bcd6d4-v94qh" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-v94qh webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-v94qh 9b87fa58-1d1a-4e94-bbf7-044e66c90b45 1113584 0 2020-08-26 17:36:42 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc0052d04e7 0xc0052d04e8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:42 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.524: INFO: Pod "webserver-deployment-6676bcd6d4-vcv45" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vcv45 webserver-deployment-6676bcd6d4- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-6676bcd6d4-vcv45 c6d585a0-ddd4-4544-ba89-258293b87ff8 1113565 0 2020-08-26 17:36:41 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d5909727-20cc-4629-b7df-a355e4966c65 0xc0052d0697 0xc0052d0698}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:41 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 53 57 48 57 55 50 55 45 50 48 99 99 45 52 54 50 57 45 98 55 100 102 45 97 51 53 53 101 52 57 54 54 99 54 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.524: INFO: Pod "webserver-deployment-84855cf797-42vdq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-42vdq webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-42vdq 4331d27f-3787-4975-af08-89597fcdb6d2 1113680 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d0847 0xc0052d0848}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
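Two pod-template-hash values appear in these dumps: 6676bcd6d4, whose pods carry image webserver:404 and stay Pending, and 84855cf797, whose pods carry docker.io/library/httpd:2.4.38-alpine. Each hash identifies one ReplicaSet of the rollout in namespace deployment-2354. To reproduce this per-ReplicaSet view outside the suite, a rough client-go sketch (the kubeconfig lookup and output format are illustrative, not taken from the framework) could be:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from whatever kubeconfig the environment provides.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List only the pods created for one ReplicaSet of the rollout, selected by
	// the pod-template-hash label visible in the dumps above.
	pods, err := client.CoreV1().Pods("deployment-2354").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=6676bcd6d4"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}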
Aug 26 17:36:51.524: INFO: Pod "webserver-deployment-84855cf797-4t8fw" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-4t8fw webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-4t8fw e49969f0-66a1-4b78-91a3-e6a2b2daa42c 1113462 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d09d7 0xc0052d09d8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:34 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Hos
tAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.21,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://660d577f540f9682b5685c8bbf33e1dd45265727043a47177b985d2328dd9014,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
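The "is available" / "is not available" verdicts follow from the status in each dump: available pods are Running with a Ready condition of True, while the unavailable ones are still Pending with Ready=False and reason ContainersNotReady. A minimal check in that spirit (a simplification; the real deployment utilities also account for things like minReadySeconds) might look like:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod is Running with its Ready condition True,
// which is the gist of the availability verdicts logged above.
func isPodReady(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pending := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	running := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		},
	}}
	fmt.Println(isPodReady(pending), isPodReady(running)) // false true
}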
Aug 26 17:36:51.525: INFO: Pod "webserver-deployment-84855cf797-658qw" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-658qw webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-658qw a81935de-e986-44fa-90d2-acffa3c71fb1 1113521 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d0b97 0xc0052d0b98}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.98,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a066742f22925c46e975e4f30e7f26fcb7b109f12daaf93f56448dcad0ca04f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.525: INFO: Pod "webserver-deployment-84855cf797-6mgv9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6mgv9 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-6mgv9 cf574f9b-36c7-452c-b3ed-a44e9bd9ade6 1113454 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d0d47 0xc0052d0d48}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:32 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Hos
tAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.20,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4ed2cacfbc5c2d9c5448a9e94e012fc307ad7dd3ed4e781eedd513be60d0df28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
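The FieldsV1 Raw values inside the managedFields entries above are not corrupted output: they are the managedFields JSON printed as decimal byte values (Go renders a []byte field this way), so each number is one ASCII character and the original JSON is fully recoverable. A minimal decoding sketch, using only the first fifteen values copied from the kube-controller-manager entry above (the real arrays are much longer); the kubelet entries decode the same way and begin with {"f:status":{ :

package main

import "fmt"

func main() {
	// Prefix of the first FieldsV1 Raw array above; each number is the decimal
	// value of one byte of the managedFields JSON.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}
	fmt.Println(string(raw)) // prints: {"f:metadata":{
}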
Aug 26 17:36:51.525: INFO: Pod "webserver-deployment-84855cf797-89vnz" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-89vnz webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-89vnz f2db5539-3e39-492e-aa29-4b19fda78f8e 1113511 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d0ef7 0xc0052d0ef8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.95,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bf3ed04bbcdd63c54aec03e747feeea077736db3515484aac5e0582c2348d551,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.525: INFO: Pod "webserver-deployment-84855cf797-dvprb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dvprb webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-dvprb dc56b368-983b-4826-ac81-8ad712291a17 1113673 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d10b7 0xc0052d10b8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.525: INFO: Pod "webserver-deployment-84855cf797-hnmdg" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-hnmdg webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-hnmdg 6e1f6267-0be5-42b6-a300-399e85d9a784 1113706 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1247 0xc0052d1248}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.526: INFO: Pod "webserver-deployment-84855cf797-ksvmn" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ksvmn webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-ksvmn b12e67b6-c3ac-4007-828e-a99e93c7386e 1113688 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d13e7 0xc0052d13e8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.526: INFO: Pod "webserver-deployment-84855cf797-ktvhg" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ktvhg webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-ktvhg 13e66c99-6ffa-42a6-8272-963ea53143f9 1113506 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1577 0xc0052d1578}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.97,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bc27e698de50ad2afb9b7c6c23a8c623af416b50a54a31dfa7eb92024846c358,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.526: INFO: Pod "webserver-deployment-84855cf797-ll4wl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ll4wl webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-ll4wl 57dc9d27-8904-477c-8ee6-eb282643aa79 1113662 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1727 0xc0052d1728}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
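Note on reading the dumps above and below: the FieldsV1{Raw:*[123 34 102 ...]} blocks are the pods' managedFields entries printed as decimal byte values; each listing is just the ASCII encoding of a small JSON document (it begins {"f:metadata":...). A minimal Go sketch for turning such a listing back into readable JSON follows; the helper name decodeRawDump is ours and not part of the e2e framework.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// decodeRawDump converts a space-separated list of decimal byte values,
// as printed in the Raw:*[...] managedFields dumps in this log, into the
// JSON text those bytes encode, pretty-printed for readability.
func decodeRawDump(dump string) (string, error) {
	var raw bytes.Buffer
	for _, tok := range strings.Fields(dump) {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", err
		}
		raw.WriteByte(byte(n))
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, raw.Bytes(), "", "  "); err != nil {
		return "", err
	}
	return pretty.String(), nil
}

func main() {
	// Prefix of one of the dumps above; the bytes spell {"f:metadata":{}}.
	out, err := decodeRawDump("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}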
Aug 26 17:36:51.527: INFO: Pod "webserver-deployment-84855cf797-ndzgz" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ndzgz webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-ndzgz 03c9a028-7bf4-4114-9121-16afc7896a0d 1113718 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d18b7 0xc0052d18b8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.527: INFO: Pod "webserver-deployment-84855cf797-nrrq2" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nrrq2 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-nrrq2 682ca8de-7a73-45a6-ae31-f3bc224d5207 1113679 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1a47 0xc0052d1a48}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.528: INFO: Pod "webserver-deployment-84855cf797-qh2lt" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-qh2lt webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-qh2lt 1de7992c-1169-418c-b66c-e96b765e1886 1113700 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1bd7 0xc0052d1bd8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.528: INFO: Pod "webserver-deployment-84855cf797-rngnd" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rngnd webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-rngnd 71d442a5-01f4-4c7a-8de1-134ac092884a 1113480 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1d67 0xc0052d1d68}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Hos
tAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.22,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://25033a7246c3ba41e464faddb64baa295722b27158534a9bc2fd063ea145f2b6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
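For contrast with the Pending pods around it, the record above is one the test reports as available: Phase is Running and the Ready condition is True, while the "is not available" pods sit in Pending with Reason ContainersNotReady. A rough Go sketch of that readiness check, assuming the k8s.io/api/core/v1 types (the framework's deployment availability logic also honors minReadySeconds, which this sketch ignores):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// essentially what separates the "is available" records from the
// "is not available" ones in this log (sketch only).
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	running := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		},
	}}
	pending := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
		},
	}}
	fmt.Println(isPodReady(running), isPodReady(pending)) // true false
}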
Aug 26 17:36:51.528: INFO: Pod "webserver-deployment-84855cf797-rzdf8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-rzdf8 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-rzdf8 f9ac16dc-7c8b-4e8c-a8e5-262d59c7074f 1113717 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0052d1f17 0xc0052d1f18}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.528: INFO: Pod "webserver-deployment-84855cf797-srh79" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-srh79 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-srh79 9fb722b7-ed51-4255-944a-ccd90a4000a4 1113738 0 2020-08-26 17:36:46 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0054be0c7 0xc0054be0c8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.529: INFO: Pod "webserver-deployment-84855cf797-tg8fm" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tg8fm webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-tg8fm 6090e8b4-f757-4187-9140-258b09bd9db8 1113494 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0054be257 0xc0054be258}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.94,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9a8b7e287327ad879eb4897ad5a184d40ccd2772ce471743a4e45dc8e6083acb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.529: INFO: Pod "webserver-deployment-84855cf797-tw8q4" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tw8q4 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-tw8q4 d44e8513-a558-442d-9c36-793eaa16f76b 1113514 0 2020-08-26 17:36:25 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0054be407 0xc0054be408}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:39 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.96,StartTime:2020-08-26 17:36:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-26 17:36:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed0e65ee87bf849bbfc0fec6fbd7950eec4ecd087e70b0d7f0c529b89a9b1f3b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.529: INFO: Pod "webserver-deployment-84855cf797-vwcvp" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vwcvp webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-vwcvp b7bd673d-5b50-4216-ae8d-95451422132c 1113643 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0054be5b7 0xc0054be5b8}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 26 17:36:51.530: INFO: Pod "webserver-deployment-84855cf797-whlz6" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-whlz6 webserver-deployment-84855cf797- deployment-2354 /api/v1/namespaces/deployment-2354/pods/webserver-deployment-84855cf797-whlz6 47c2bdc1-4b47-404e-89f0-b1bb240f0c4e 1113667 0 2020-08-26 17:36:45 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 c262ce7b-ec7e-45c7-8493-7c87893adc01 0xc0054be757 0xc0054be758}] []  [{kube-controller-manager Update v1 2020-08-26 17:36:45 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 50 54 50 99 101 55 98 45 101 99 55 101 45 52 53 99 55 45 56 52 57 51 45 55 99 56 55 56 57 51 97 100 99 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-26 17:36:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-26 17:36:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-26 17:36:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:36:51.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2354" for this suite.

• [SLOW TEST:27.543 seconds]
[sig-apps] Deployment
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":192,"skipped":3270,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:36:52.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:37:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-544" for this suite.

• [SLOW TEST:62.639 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3284,"failed":0}
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:37:54.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 26 17:37:55.019: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug 26 17:37:55.585: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 26 17:37:58.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:38:00.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:38:02.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:38:04.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060275, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:38:07.185: INFO: Waited 638.882181ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:38:07.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8693" for this suite.

• [SLOW TEST:12.927 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":194,"skipped":3285,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:38:07.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:38:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7265" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":195,"skipped":3292,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:38:08.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7340.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7340.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7340.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
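The dig loops above repeatedly resolve headless-service subdomain names of the form <pod-hostname>.<service>.<namespace>.svc.cluster.local over both UDP and TCP, writing an OK marker file per name once a non-empty answer comes back. A resolver-side sketch of the same lookups, runnable from inside a pod in this cluster; it only exercises the system-resolver path, unlike dig's explicit +tcp/+notcp runs, and the cluster.local suffix is assumed:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Names the probe loop above resolves inside namespace dns-7340.
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local",
		"dns-test-service-2.dns-7340.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name) // uses /etc/resolv.conf inside the pod
		if err != nil {
			fmt.Printf("FAIL %s: %v\n", name, err)
			continue
		}
		fmt.Printf("OK   %s -> %v\n", name, addrs)
	}
}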

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:38:23.024: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.028: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.031: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.034: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.266: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: Get https://172.30.12.66:44383/api/v1/namespaces/dns-7340/pods/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e/proxy/results/wheezy_udp@PodARecord: stream error: stream ID 11581; INTERNAL_ERROR
Aug 26 17:38:23.275: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.278: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.281: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.283: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:23.289: INFO: Lookups using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local]

Aug 26 17:38:28.965: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:28.968: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:29.005: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:29.222: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:29.646: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:30.670: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:30.674: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:30.677: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:30.681: INFO: Lookups using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local]

Aug 26 17:38:34.625: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.039: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.059: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.440: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.449: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.452: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.454: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.457: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:35.462: INFO: Lookups using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local]

Aug 26 17:38:38.294: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.298: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.302: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.312: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.315: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.318: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.320: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:38.326: INFO: Lookups using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local]

Aug 26 17:38:43.586: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.590: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.646: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.650: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.837: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.840: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.842: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.844: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local from pod dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e: the server could not find the requested resource (get pods dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e)
Aug 26 17:38:43.849: INFO: Lookups using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7340.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7340.svc.cluster.local jessie_udp@dns-test-service-2.dns-7340.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7340.svc.cluster.local]

Aug 26 17:38:48.346: INFO: DNS probes using dns-7340/dns-test-4e37be71-1c7f-4a48-b71b-58c345ebec5e succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:38:49.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7340" for this suite.

• [SLOW TEST:41.248 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":196,"skipped":3313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:38:49.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 26 17:38:49.788: INFO: Waiting up to 5m0s for pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330" in namespace "containers-1516" to be "Succeeded or Failed"
Aug 26 17:38:49.805: INFO: Pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330": Phase="Pending", Reason="", readiness=false. Elapsed: 17.070633ms
Aug 26 17:38:52.020: INFO: Pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232228478s
Aug 26 17:38:54.023: INFO: Pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235685144s
Aug 26 17:38:56.061: INFO: Pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273027068s
STEP: Saw pod success
Aug 26 17:38:56.061: INFO: Pod "client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330" satisfied condition "Succeeded or Failed"
Aug 26 17:38:56.064: INFO: Trying to get logs from node kali-worker2 pod client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330 container test-container: 
STEP: delete the pod
Aug 26 17:38:56.148: INFO: Waiting for pod client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330 to disappear
Aug 26 17:38:56.343: INFO: Pod client-containers-7f1e1180-1d03-46b3-af41-e95bdec2d330 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:38:56.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1516" for this suite.

• [SLOW TEST:6.966 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3376,"failed":0}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:38:56.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 26 17:39:01.745: INFO: Successfully updated pod "labelsupdated1fd7f7b-77c3-43e3-b122-a2ec5a95219a"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:39:03.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7547" for this suite.

• [SLOW TEST:7.398 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3376,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:39:03.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-9f427bfa-0380-411b-ab89-79367ac95c3c
STEP: Creating a pod to test consume secrets
Aug 26 17:39:04.019: INFO: Waiting up to 5m0s for pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c" in namespace "secrets-3209" to be "Succeeded or Failed"
Aug 26 17:39:04.053: INFO: Pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.11416ms
Aug 26 17:39:06.338: INFO: Pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318413354s
Aug 26 17:39:08.517: INFO: Pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c": Phase="Running", Reason="", readiness=true. Elapsed: 4.497777614s
Aug 26 17:39:10.522: INFO: Pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.502648439s
STEP: Saw pod success
Aug 26 17:39:10.522: INFO: Pod "pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c" satisfied condition "Succeeded or Failed"
Aug 26 17:39:10.525: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c container secret-volume-test: 
STEP: delete the pod
Aug 26 17:39:10.835: INFO: Waiting for pod pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c to disappear
Aug 26 17:39:10.852: INFO: Pod pod-secrets-2d83874d-f3c0-4705-8cfd-442be3cc754c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:39:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3209" for this suite.

• [SLOW TEST:7.128 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:39:10.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-175d08c9-307c-4b88-ac5b-6d66d2bf66fa in namespace container-probe-5081
Aug 26 17:39:15.662: INFO: Started pod test-webserver-175d08c9-307c-4b88-ac5b-6d66d2bf66fa in namespace container-probe-5081
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 17:39:15.666: INFO: Initial restart count of pod test-webserver-175d08c9-307c-4b88-ac5b-6d66d2bf66fa is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:43:16.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5081" for this suite.

• [SLOW TEST:246.672 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3415,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:43:17.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2236
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 17:43:18.780: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 26 17:43:19.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:43:21.835: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:43:24.143: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:43:25.670: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 17:43:27.620: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:29.618: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:31.618: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:33.619: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:35.698: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:37.627: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:39.619: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 17:43:41.618: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 26 17:43:41.660: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 26 17:43:43.802: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 26 17:43:50.100: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.42:8080/dial?request=hostname&protocol=udp&host=10.244.1.41&port=8081&tries=1'] Namespace:pod-network-test-2236 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:43:50.100: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:43:50.131474       7 log.go:172] (0xc0028bbb80) (0xc001135ae0) Create stream
I0826 17:43:50.131525       7 log.go:172] (0xc0028bbb80) (0xc001135ae0) Stream added, broadcasting: 1
I0826 17:43:50.133466       7 log.go:172] (0xc0028bbb80) Reply frame received for 1
I0826 17:43:50.133510       7 log.go:172] (0xc0028bbb80) (0xc00257a500) Create stream
I0826 17:43:50.133520       7 log.go:172] (0xc0028bbb80) (0xc00257a500) Stream added, broadcasting: 3
I0826 17:43:50.134441       7 log.go:172] (0xc0028bbb80) Reply frame received for 3
I0826 17:43:50.134501       7 log.go:172] (0xc0028bbb80) (0xc001135cc0) Create stream
I0826 17:43:50.134526       7 log.go:172] (0xc0028bbb80) (0xc001135cc0) Stream added, broadcasting: 5
I0826 17:43:50.135452       7 log.go:172] (0xc0028bbb80) Reply frame received for 5
I0826 17:43:50.213377       7 log.go:172] (0xc0028bbb80) Data frame received for 3
I0826 17:43:50.213410       7 log.go:172] (0xc00257a500) (3) Data frame handling
I0826 17:43:50.213445       7 log.go:172] (0xc00257a500) (3) Data frame sent
I0826 17:43:50.213633       7 log.go:172] (0xc0028bbb80) Data frame received for 5
I0826 17:43:50.213665       7 log.go:172] (0xc001135cc0) (5) Data frame handling
I0826 17:43:50.213688       7 log.go:172] (0xc0028bbb80) Data frame received for 3
I0826 17:43:50.213705       7 log.go:172] (0xc00257a500) (3) Data frame handling
I0826 17:43:50.215318       7 log.go:172] (0xc0028bbb80) Data frame received for 1
I0826 17:43:50.215336       7 log.go:172] (0xc001135ae0) (1) Data frame handling
I0826 17:43:50.215352       7 log.go:172] (0xc001135ae0) (1) Data frame sent
I0826 17:43:50.215374       7 log.go:172] (0xc0028bbb80) (0xc001135ae0) Stream removed, broadcasting: 1
I0826 17:43:50.215394       7 log.go:172] (0xc0028bbb80) Go away received
I0826 17:43:50.215501       7 log.go:172] (0xc0028bbb80) (0xc001135ae0) Stream removed, broadcasting: 1
I0826 17:43:50.215518       7 log.go:172] (0xc0028bbb80) (0xc00257a500) Stream removed, broadcasting: 3
I0826 17:43:50.215528       7 log.go:172] (0xc0028bbb80) (0xc001135cc0) Stream removed, broadcasting: 5
Aug 26 17:43:50.215: INFO: Waiting for responses: map[]
Aug 26 17:43:50.218: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.42:8080/dial?request=hostname&protocol=udp&host=10.244.2.115&port=8081&tries=1'] Namespace:pod-network-test-2236 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:43:50.218: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:43:50.247781       7 log.go:172] (0xc00155a420) (0xc0030c8b40) Create stream
I0826 17:43:50.247814       7 log.go:172] (0xc00155a420) (0xc0030c8b40) Stream added, broadcasting: 1
I0826 17:43:50.254566       7 log.go:172] (0xc00155a420) Reply frame received for 1
I0826 17:43:50.254608       7 log.go:172] (0xc00155a420) (0xc001135f40) Create stream
I0826 17:43:50.254622       7 log.go:172] (0xc00155a420) (0xc001135f40) Stream added, broadcasting: 3
I0826 17:43:50.256025       7 log.go:172] (0xc00155a420) Reply frame received for 3
I0826 17:43:50.256052       7 log.go:172] (0xc00155a420) (0xc0030c8be0) Create stream
I0826 17:43:50.256060       7 log.go:172] (0xc00155a420) (0xc0030c8be0) Stream added, broadcasting: 5
I0826 17:43:50.257182       7 log.go:172] (0xc00155a420) Reply frame received for 5
I0826 17:43:50.326470       7 log.go:172] (0xc00155a420) Data frame received for 5
I0826 17:43:50.326539       7 log.go:172] (0xc0030c8be0) (5) Data frame handling
I0826 17:43:50.326562       7 log.go:172] (0xc00155a420) Data frame received for 3
I0826 17:43:50.326575       7 log.go:172] (0xc001135f40) (3) Data frame handling
I0826 17:43:50.326589       7 log.go:172] (0xc001135f40) (3) Data frame sent
I0826 17:43:50.326598       7 log.go:172] (0xc00155a420) Data frame received for 3
I0826 17:43:50.326607       7 log.go:172] (0xc001135f40) (3) Data frame handling
I0826 17:43:50.326616       7 log.go:172] (0xc00155a420) Data frame received for 1
I0826 17:43:50.326623       7 log.go:172] (0xc0030c8b40) (1) Data frame handling
I0826 17:43:50.326633       7 log.go:172] (0xc0030c8b40) (1) Data frame sent
I0826 17:43:50.326652       7 log.go:172] (0xc00155a420) (0xc0030c8b40) Stream removed, broadcasting: 1
I0826 17:43:50.326664       7 log.go:172] (0xc00155a420) Go away received
I0826 17:43:50.326737       7 log.go:172] (0xc00155a420) (0xc0030c8b40) Stream removed, broadcasting: 1
I0826 17:43:50.326761       7 log.go:172] (0xc00155a420) (0xc001135f40) Stream removed, broadcasting: 3
I0826 17:43:50.326772       7 log.go:172] (0xc00155a420) (0xc0030c8be0) Stream removed, broadcasting: 5
Aug 26 17:43:50.326: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:43:50.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2236" for this suite.

• [SLOW TEST:32.729 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3423,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:43:50.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5583
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5583
I0826 17:43:50.693808       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5583, replica count: 2
I0826 17:43:53.744278       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:43:56.744497       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:43:59.744868       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:44:02.745109       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 26 17:44:02.745: INFO: Creating new exec pod
Aug 26 17:44:12.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-5583 execpodd7wpr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 26 17:44:15.752: INFO: stderr: "I0826 17:44:15.639291    3756 log.go:172] (0xc00003ac60) (0xc00060c140) Create stream\nI0826 17:44:15.639346    3756 log.go:172] (0xc00003ac60) (0xc00060c140) Stream added, broadcasting: 1\nI0826 17:44:15.642202    3756 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0826 17:44:15.642269    3756 log.go:172] (0xc00003ac60) (0xc00060c1e0) Create stream\nI0826 17:44:15.642287    3756 log.go:172] (0xc00003ac60) (0xc00060c1e0) Stream added, broadcasting: 3\nI0826 17:44:15.643130    3756 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0826 17:44:15.643152    3756 log.go:172] (0xc00003ac60) (0xc0004d4000) Create stream\nI0826 17:44:15.643158    3756 log.go:172] (0xc00003ac60) (0xc0004d4000) Stream added, broadcasting: 5\nI0826 17:44:15.644034    3756 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0826 17:44:15.743035    3756 log.go:172] (0xc00003ac60) Data frame received for 5\nI0826 17:44:15.743067    3756 log.go:172] (0xc0004d4000) (5) Data frame handling\nI0826 17:44:15.743078    3756 log.go:172] (0xc0004d4000) (5) Data frame sent\nI0826 17:44:15.743085    3756 log.go:172] (0xc00003ac60) Data frame received for 5\nI0826 17:44:15.743090    3756 log.go:172] (0xc0004d4000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0826 17:44:15.743105    3756 log.go:172] (0xc00003ac60) Data frame received for 3\nI0826 17:44:15.743110    3756 log.go:172] (0xc00060c1e0) (3) Data frame handling\nI0826 17:44:15.744257    3756 log.go:172] (0xc00003ac60) Data frame received for 1\nI0826 17:44:15.744287    3756 log.go:172] (0xc00060c140) (1) Data frame handling\nI0826 17:44:15.744302    3756 log.go:172] (0xc00060c140) (1) Data frame sent\nI0826 17:44:15.744314    3756 log.go:172] (0xc00003ac60) (0xc00060c140) Stream removed, broadcasting: 1\nI0826 17:44:15.744324    3756 log.go:172] (0xc00003ac60) Go away received\nI0826 17:44:15.744647    3756 log.go:172] (0xc00003ac60) (0xc00060c140) Stream removed, broadcasting: 1\nI0826 17:44:15.744659    3756 log.go:172] (0xc00003ac60) (0xc00060c1e0) Stream removed, broadcasting: 3\nI0826 17:44:15.744664    3756 log.go:172] (0xc00003ac60) (0xc0004d4000) Stream removed, broadcasting: 5\n"
Aug 26 17:44:15.752: INFO: stdout: ""
Aug 26 17:44:15.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-5583 execpodd7wpr -- /bin/sh -x -c nc -zv -t -w 2 10.96.66.77 80'
Aug 26 17:44:16.062: INFO: stderr: "I0826 17:44:15.967192    3789 log.go:172] (0xc000bb2c60) (0xc000ba6460) Create stream\nI0826 17:44:15.967241    3789 log.go:172] (0xc000bb2c60) (0xc000ba6460) Stream added, broadcasting: 1\nI0826 17:44:15.969751    3789 log.go:172] (0xc000bb2c60) Reply frame received for 1\nI0826 17:44:15.969803    3789 log.go:172] (0xc000bb2c60) (0xc000be40a0) Create stream\nI0826 17:44:15.969824    3789 log.go:172] (0xc000bb2c60) (0xc000be40a0) Stream added, broadcasting: 3\nI0826 17:44:15.970579    3789 log.go:172] (0xc000bb2c60) Reply frame received for 3\nI0826 17:44:15.970608    3789 log.go:172] (0xc000bb2c60) (0xc000ba6500) Create stream\nI0826 17:44:15.970619    3789 log.go:172] (0xc000bb2c60) (0xc000ba6500) Stream added, broadcasting: 5\nI0826 17:44:15.971360    3789 log.go:172] (0xc000bb2c60) Reply frame received for 5\nI0826 17:44:16.046169    3789 log.go:172] (0xc000bb2c60) Data frame received for 3\nI0826 17:44:16.046207    3789 log.go:172] (0xc000be40a0) (3) Data frame handling\nI0826 17:44:16.046225    3789 log.go:172] (0xc000bb2c60) Data frame received for 5\nI0826 17:44:16.046232    3789 log.go:172] (0xc000ba6500) (5) Data frame handling\nI0826 17:44:16.046240    3789 log.go:172] (0xc000ba6500) (5) Data frame sent\nI0826 17:44:16.046247    3789 log.go:172] (0xc000bb2c60) Data frame received for 5\nI0826 17:44:16.046252    3789 log.go:172] (0xc000ba6500) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.66.77 80\nConnection to 10.96.66.77 80 port [tcp/http] succeeded!\nI0826 17:44:16.047297    3789 log.go:172] (0xc000bb2c60) Data frame received for 1\nI0826 17:44:16.047325    3789 log.go:172] (0xc000ba6460) (1) Data frame handling\nI0826 17:44:16.047338    3789 log.go:172] (0xc000ba6460) (1) Data frame sent\nI0826 17:44:16.047348    3789 log.go:172] (0xc000bb2c60) (0xc000ba6460) Stream removed, broadcasting: 1\nI0826 17:44:16.047358    3789 log.go:172] (0xc000bb2c60) Go away received\nI0826 17:44:16.047859    3789 log.go:172] (0xc000bb2c60) (0xc000ba6460) Stream removed, broadcasting: 1\nI0826 17:44:16.047895    3789 log.go:172] (0xc000bb2c60) (0xc000be40a0) Stream removed, broadcasting: 3\nI0826 17:44:16.047913    3789 log.go:172] (0xc000bb2c60) (0xc000ba6500) Stream removed, broadcasting: 5\n"
Aug 26 17:44:16.062: INFO: stdout: ""
Aug 26 17:44:16.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-5583 execpodd7wpr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32254'
Aug 26 17:44:16.303: INFO: stderr: "I0826 17:44:16.193335    3810 log.go:172] (0xc000928000) (0xc00098a000) Create stream\nI0826 17:44:16.193395    3810 log.go:172] (0xc000928000) (0xc00098a000) Stream added, broadcasting: 1\nI0826 17:44:16.195056    3810 log.go:172] (0xc000928000) Reply frame received for 1\nI0826 17:44:16.195096    3810 log.go:172] (0xc000928000) (0xc00098a0a0) Create stream\nI0826 17:44:16.195104    3810 log.go:172] (0xc000928000) (0xc00098a0a0) Stream added, broadcasting: 3\nI0826 17:44:16.196025    3810 log.go:172] (0xc000928000) Reply frame received for 3\nI0826 17:44:16.196050    3810 log.go:172] (0xc000928000) (0xc0004e0be0) Create stream\nI0826 17:44:16.196057    3810 log.go:172] (0xc000928000) (0xc0004e0be0) Stream added, broadcasting: 5\nI0826 17:44:16.196961    3810 log.go:172] (0xc000928000) Reply frame received for 5\nI0826 17:44:16.292578    3810 log.go:172] (0xc000928000) Data frame received for 5\nI0826 17:44:16.292670    3810 log.go:172] (0xc0004e0be0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 32254\nConnection to 172.18.0.15 32254 port [tcp/32254] succeeded!\nI0826 17:44:16.292698    3810 log.go:172] (0xc000928000) Data frame received for 3\nI0826 17:44:16.292718    3810 log.go:172] (0xc00098a0a0) (3) Data frame handling\nI0826 17:44:16.292895    3810 log.go:172] (0xc0004e0be0) (5) Data frame sent\nI0826 17:44:16.292922    3810 log.go:172] (0xc000928000) Data frame received for 5\nI0826 17:44:16.292937    3810 log.go:172] (0xc0004e0be0) (5) Data frame handling\nI0826 17:44:16.294168    3810 log.go:172] (0xc000928000) Data frame received for 1\nI0826 17:44:16.294195    3810 log.go:172] (0xc00098a000) (1) Data frame handling\nI0826 17:44:16.294208    3810 log.go:172] (0xc00098a000) (1) Data frame sent\nI0826 17:44:16.294222    3810 log.go:172] (0xc000928000) (0xc00098a000) Stream removed, broadcasting: 1\nI0826 17:44:16.294241    3810 log.go:172] (0xc000928000) Go away received\nI0826 17:44:16.294566    3810 log.go:172] (0xc000928000) (0xc00098a000) Stream removed, broadcasting: 1\nI0826 17:44:16.294585    3810 log.go:172] (0xc000928000) (0xc00098a0a0) Stream removed, broadcasting: 3\nI0826 17:44:16.294598    3810 log.go:172] (0xc000928000) (0xc0004e0be0) Stream removed, broadcasting: 5\n"
Aug 26 17:44:16.303: INFO: stdout: ""
Aug 26 17:44:16.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-5583 execpodd7wpr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32254'
Aug 26 17:44:16.532: INFO: stderr: "I0826 17:44:16.441330    3832 log.go:172] (0xc0008a6790) (0xc0009e4320) Create stream\nI0826 17:44:16.441413    3832 log.go:172] (0xc0008a6790) (0xc0009e4320) Stream added, broadcasting: 1\nI0826 17:44:16.443851    3832 log.go:172] (0xc0008a6790) Reply frame received for 1\nI0826 17:44:16.443909    3832 log.go:172] (0xc0008a6790) (0xc000a44000) Create stream\nI0826 17:44:16.443938    3832 log.go:172] (0xc0008a6790) (0xc000a44000) Stream added, broadcasting: 3\nI0826 17:44:16.444982    3832 log.go:172] (0xc0008a6790) Reply frame received for 3\nI0826 17:44:16.445039    3832 log.go:172] (0xc0008a6790) (0xc000a440a0) Create stream\nI0826 17:44:16.445061    3832 log.go:172] (0xc0008a6790) (0xc000a440a0) Stream added, broadcasting: 5\nI0826 17:44:16.446186    3832 log.go:172] (0xc0008a6790) Reply frame received for 5\nI0826 17:44:16.521827    3832 log.go:172] (0xc0008a6790) Data frame received for 5\nI0826 17:44:16.521869    3832 log.go:172] (0xc000a440a0) (5) Data frame handling\nI0826 17:44:16.521881    3832 log.go:172] (0xc000a440a0) (5) Data frame sent\nI0826 17:44:16.521886    3832 log.go:172] (0xc0008a6790) Data frame received for 5\nI0826 17:44:16.521891    3832 log.go:172] (0xc000a440a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32254\nConnection to 172.18.0.13 32254 port [tcp/32254] succeeded!\nI0826 17:44:16.521908    3832 log.go:172] (0xc0008a6790) Data frame received for 3\nI0826 17:44:16.521912    3832 log.go:172] (0xc000a44000) (3) Data frame handling\nI0826 17:44:16.523126    3832 log.go:172] (0xc0008a6790) Data frame received for 1\nI0826 17:44:16.523151    3832 log.go:172] (0xc0009e4320) (1) Data frame handling\nI0826 17:44:16.523169    3832 log.go:172] (0xc0009e4320) (1) Data frame sent\nI0826 17:44:16.523193    3832 log.go:172] (0xc0008a6790) (0xc0009e4320) Stream removed, broadcasting: 1\nI0826 17:44:16.523304    3832 log.go:172] (0xc0008a6790) Go away received\nI0826 17:44:16.523534    3832 log.go:172] (0xc0008a6790) (0xc0009e4320) Stream removed, broadcasting: 1\nI0826 17:44:16.523551    3832 log.go:172] (0xc0008a6790) (0xc000a44000) Stream removed, broadcasting: 3\nI0826 17:44:16.523560    3832 log.go:172] (0xc0008a6790) (0xc000a440a0) Stream removed, broadcasting: 5\n"
Aug 26 17:44:16.532: INFO: stdout: ""
Aug 26 17:44:16.532: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:44:16.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5583" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:26.261 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":202,"skipped":3429,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:44:16.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 26 17:44:17.277: INFO: Pod name wrapped-volume-race-8ad2f411-895a-4b90-aa91-35d448d8e1ed: Found 0 pods out of 5
Aug 26 17:44:22.818: INFO: Pod name wrapped-volume-race-8ad2f411-895a-4b90-aa91-35d448d8e1ed: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8ad2f411-895a-4b90-aa91-35d448d8e1ed in namespace emptydir-wrapper-8941, will wait for the garbage collector to delete the pods
Aug 26 17:44:39.530: INFO: Deleting ReplicationController wrapped-volume-race-8ad2f411-895a-4b90-aa91-35d448d8e1ed took: 86.587292ms
Aug 26 17:44:40.131: INFO: Terminating ReplicationController wrapped-volume-race-8ad2f411-895a-4b90-aa91-35d448d8e1ed pods took: 600.295938ms
STEP: Creating RC which spawns configmap-volume pods
Aug 26 17:45:01.699: INFO: Pod name wrapped-volume-race-bc3e70d7-a688-4a52-ad1e-92b90273b94f: Found 0 pods out of 5
Aug 26 17:45:06.712: INFO: Pod name wrapped-volume-race-bc3e70d7-a688-4a52-ad1e-92b90273b94f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bc3e70d7-a688-4a52-ad1e-92b90273b94f in namespace emptydir-wrapper-8941, will wait for the garbage collector to delete the pods
Aug 26 17:45:34.109: INFO: Deleting ReplicationController wrapped-volume-race-bc3e70d7-a688-4a52-ad1e-92b90273b94f took: 140.863068ms
Aug 26 17:45:35.609: INFO: Terminating ReplicationController wrapped-volume-race-bc3e70d7-a688-4a52-ad1e-92b90273b94f pods took: 1.500259567s
STEP: Creating RC which spawns configmap-volume pods
Aug 26 17:45:52.107: INFO: Pod name wrapped-volume-race-c5b4568d-a89a-481d-b3d7-2580062c24e9: Found 0 pods out of 5
Aug 26 17:45:57.126: INFO: Pod name wrapped-volume-race-c5b4568d-a89a-481d-b3d7-2580062c24e9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c5b4568d-a89a-481d-b3d7-2580062c24e9 in namespace emptydir-wrapper-8941, will wait for the garbage collector to delete the pods
Aug 26 17:46:13.455: INFO: Deleting ReplicationController wrapped-volume-race-c5b4568d-a89a-481d-b3d7-2580062c24e9 took: 99.11676ms
Aug 26 17:46:13.855: INFO: Terminating ReplicationController wrapped-volume-race-c5b4568d-a89a-481d-b3d7-2580062c24e9 pods took: 400.211848ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:46:30.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8941" for this suite.

• [SLOW TEST:133.481 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":203,"skipped":3449,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:46:30.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 17:46:30.208: INFO: Waiting up to 5m0s for pod "pod-7c3628ea-617e-49cd-9836-30850ae76ceb" in namespace "emptydir-488" to be "Succeeded or Failed"
Aug 26 17:46:30.217: INFO: Pod "pod-7c3628ea-617e-49cd-9836-30850ae76ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646106ms
Aug 26 17:46:32.221: INFO: Pod "pod-7c3628ea-617e-49cd-9836-30850ae76ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012944013s
Aug 26 17:46:34.312: INFO: Pod "pod-7c3628ea-617e-49cd-9836-30850ae76ceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104029014s
STEP: Saw pod success
Aug 26 17:46:34.313: INFO: Pod "pod-7c3628ea-617e-49cd-9836-30850ae76ceb" satisfied condition "Succeeded or Failed"
Aug 26 17:46:34.315: INFO: Trying to get logs from node kali-worker pod pod-7c3628ea-617e-49cd-9836-30850ae76ceb container test-container: 
STEP: delete the pod
Aug 26 17:46:34.613: INFO: Waiting for pod pod-7c3628ea-617e-49cd-9836-30850ae76ceb to disappear
Aug 26 17:46:34.648: INFO: Pod pod-7c3628ea-617e-49cd-9836-30850ae76ceb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:46:34.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-488" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:46:34.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:46:35.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:46:38.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:46:40.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:46:42.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060795, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:46:45.413: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:46:55.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5090" for this suite.
STEP: Destroying namespace "webhook-5090-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.913 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":205,"skipped":3476,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:46:55.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 26 17:46:55.711: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:46:55.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8043" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":206,"skipped":3491,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:46:55.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7530 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7530;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7530 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7530;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7530.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7530.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7530.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7530.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7530.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7530.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.6.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.6.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.6.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.6.251_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7530 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7530;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7530 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7530;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7530.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7530.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7530.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7530.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7530.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7530.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7530.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7530.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.6.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.6.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.6.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.6.251_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 26 17:47:06.505: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.508: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.511: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.514: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.517: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.520: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.523: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.544: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.547: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.549: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.552: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.555: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.558: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.561: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.563: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:06.580: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:11.585: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.589: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.592: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.595: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.598: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.605: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.607: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.630: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.633: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.635: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.641: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:11.667: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:16.585: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.588: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.590: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.593: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.596: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.602: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.604: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.625: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.628: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.631: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.637: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.642: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:16.687: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:21.584: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.587: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.591: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.606: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.613: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.621: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.669: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.671: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.678: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.693: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.695: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.697: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.699: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.701: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:21.714: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:26.586: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.590: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.597: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.602: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.605: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.608: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.629: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.631: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.634: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.638: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.641: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.647: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:26.665: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:31.603: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.628: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.630: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.635: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.637: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.640: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.658: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.680: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.683: INFO: Unable to read jessie_udp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530 from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.689: INFO: Unable to read jessie_udp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.692: INFO: Unable to read jessie_tcp@dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.698: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.701: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc from pod dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619: the server could not find the requested resource (get pods dns-test-0282ebba-6a54-4b73-af34-236ee4531619)
Aug 26 17:47:31.720: INFO: Lookups using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7530 wheezy_tcp@dns-test-service.dns-7530 wheezy_udp@dns-test-service.dns-7530.svc wheezy_tcp@dns-test-service.dns-7530.svc wheezy_udp@_http._tcp.dns-test-service.dns-7530.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7530.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7530 jessie_tcp@dns-test-service.dns-7530 jessie_udp@dns-test-service.dns-7530.svc jessie_tcp@dns-test-service.dns-7530.svc jessie_udp@_http._tcp.dns-test-service.dns-7530.svc jessie_tcp@_http._tcp.dns-test-service.dns-7530.svc]

Aug 26 17:47:36.670: INFO: DNS probes using dns-7530/dns-test-0282ebba-6a54-4b73-af34-236ee4531619 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:47:37.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7530" for this suite.

• [SLOW TEST:41.930 seconds]
[sig-network] DNS
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":207,"skipped":3497,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:47:37.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 26 17:47:38.727: INFO: Waiting up to 5m0s for pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a" in namespace "containers-8675" to be "Succeeded or Failed"
Aug 26 17:47:38.807: INFO: Pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a": Phase="Pending", Reason="", readiness=false. Elapsed: 79.838172ms
Aug 26 17:47:40.811: INFO: Pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083822906s
Aug 26 17:47:42.837: INFO: Pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110449764s
Aug 26 17:47:44.841: INFO: Pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114649472s
STEP: Saw pod success
Aug 26 17:47:44.841: INFO: Pod "client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a" satisfied condition "Succeeded or Failed"
Aug 26 17:47:44.844: INFO: Trying to get logs from node kali-worker2 pod client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a container test-container: 
STEP: delete the pod
Aug 26 17:47:44.906: INFO: Waiting for pod client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a to disappear
Aug 26 17:47:44.921: INFO: Pod client-containers-be7ede06-c6c0-4569-9ba1-024814d6894a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:47:44.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8675" for this suite.

• [SLOW TEST:7.175 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:47:44.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 26 17:47:45.030: INFO: >>> kubeConfig: /root/.kube/config
Aug 26 17:47:47.963: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9577" for this suite.

• [SLOW TEST:15.203 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":209,"skipped":3566,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:00.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 26 17:48:10.953: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 17:48:10.978: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 17:48:12.978: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 17:48:12.983: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 17:48:14.978: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 17:48:14.990: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 17:48:16.978: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 17:48:16.982: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 26 17:48:18.978: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 26 17:48:18.982: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:18.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-993" for this suite.

• [SLOW TEST:18.857 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3578,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:18.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-9073/configmap-test-93f72152-e816-49b9-849c-d74c629033f0
STEP: Creating a pod to test consume configMaps
Aug 26 17:48:19.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37" in namespace "configmap-9073" to be "Succeeded or Failed"
Aug 26 17:48:19.127: INFO: Pod "pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37": Phase="Pending", Reason="", readiness=false. Elapsed: 22.965397ms
Aug 26 17:48:21.132: INFO: Pod "pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027699436s
Aug 26 17:48:23.140: INFO: Pod "pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035285485s
STEP: Saw pod success
Aug 26 17:48:23.140: INFO: Pod "pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37" satisfied condition "Succeeded or Failed"
Aug 26 17:48:23.142: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37 container env-test: 
STEP: delete the pod
Aug 26 17:48:23.235: INFO: Waiting for pod pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37 to disappear
Aug 26 17:48:23.239: INFO: Pod pod-configmaps-7dff819f-81e5-4c0d-bc50-51f5ebc56a37 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:23.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9073" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3590,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:23.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:48:23.699: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9" in namespace "security-context-test-5055" to be "Succeeded or Failed"
Aug 26 17:48:23.727: INFO: Pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.256903ms
Aug 26 17:48:25.731: INFO: Pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031753712s
Aug 26 17:48:27.790: INFO: Pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090846402s
Aug 26 17:48:29.794: INFO: Pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095204539s
Aug 26 17:48:29.795: INFO: Pod "alpine-nnp-false-3631fa24-dda7-4283-a50b-4b9249a3c3c9" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:29.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5055" for this suite.

• [SLOW TEST:6.562 seconds]
[k8s.io] Security Context
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3609,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:29.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 26 17:48:35.996: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1927 PodName:pod-sharedvolume-12346f2d-f180-4054-8446-28c968653556 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 17:48:35.996: INFO: >>> kubeConfig: /root/.kube/config
I0826 17:48:36.022333       7 log.go:172] (0xc005cd3340) (0xc00257b180) Create stream
I0826 17:48:36.022361       7 log.go:172] (0xc005cd3340) (0xc00257b180) Stream added, broadcasting: 1
I0826 17:48:36.024173       7 log.go:172] (0xc005cd3340) Reply frame received for 1
I0826 17:48:36.024227       7 log.go:172] (0xc005cd3340) (0xc00257b2c0) Create stream
I0826 17:48:36.024248       7 log.go:172] (0xc005cd3340) (0xc00257b2c0) Stream added, broadcasting: 3
I0826 17:48:36.025391       7 log.go:172] (0xc005cd3340) Reply frame received for 3
I0826 17:48:36.025438       7 log.go:172] (0xc005cd3340) (0xc001e03220) Create stream
I0826 17:48:36.025452       7 log.go:172] (0xc005cd3340) (0xc001e03220) Stream added, broadcasting: 5
I0826 17:48:36.026351       7 log.go:172] (0xc005cd3340) Reply frame received for 5
I0826 17:48:36.105215       7 log.go:172] (0xc005cd3340) Data frame received for 5
I0826 17:48:36.105244       7 log.go:172] (0xc001e03220) (5) Data frame handling
I0826 17:48:36.105264       7 log.go:172] (0xc005cd3340) Data frame received for 3
I0826 17:48:36.105292       7 log.go:172] (0xc00257b2c0) (3) Data frame handling
I0826 17:48:36.105314       7 log.go:172] (0xc00257b2c0) (3) Data frame sent
I0826 17:48:36.105445       7 log.go:172] (0xc005cd3340) Data frame received for 3
I0826 17:48:36.105471       7 log.go:172] (0xc00257b2c0) (3) Data frame handling
I0826 17:48:36.106854       7 log.go:172] (0xc005cd3340) Data frame received for 1
I0826 17:48:36.106882       7 log.go:172] (0xc00257b180) (1) Data frame handling
I0826 17:48:36.106897       7 log.go:172] (0xc00257b180) (1) Data frame sent
I0826 17:48:36.106911       7 log.go:172] (0xc005cd3340) (0xc00257b180) Stream removed, broadcasting: 1
I0826 17:48:36.106938       7 log.go:172] (0xc005cd3340) Go away received
I0826 17:48:36.107074       7 log.go:172] (0xc005cd3340) (0xc00257b180) Stream removed, broadcasting: 1
I0826 17:48:36.107094       7 log.go:172] (0xc005cd3340) (0xc00257b2c0) Stream removed, broadcasting: 3
I0826 17:48:36.107103       7 log.go:172] (0xc005cd3340) (0xc001e03220) Stream removed, broadcasting: 5
Aug 26 17:48:36.107: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:36.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1927" for this suite.

• [SLOW TEST:6.306 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":213,"skipped":3618,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:36.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug 26 17:48:36.224: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-836" to be "Succeeded or Failed"
Aug 26 17:48:36.242: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.313369ms
Aug 26 17:48:38.246: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021772735s
Aug 26 17:48:40.251: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026284544s
Aug 26 17:48:42.287: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062174565s
STEP: Saw pod success
Aug 26 17:48:42.287: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 26 17:48:42.290: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 26 17:48:42.431: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 17:48:42.444: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:48:42.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-836" for this suite.

• [SLOW TEST:6.337 seconds]
[sig-storage] HostPath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3628,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:48:42.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 26 17:48:42.953: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:43.011: INFO: Number of nodes with available pods: 0
Aug 26 17:48:43.011: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:48:44.019: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:44.245: INFO: Number of nodes with available pods: 0
Aug 26 17:48:44.245: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:48:45.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:45.063: INFO: Number of nodes with available pods: 0
Aug 26 17:48:45.063: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:48:46.049: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:46.052: INFO: Number of nodes with available pods: 0
Aug 26 17:48:46.052: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:48:47.099: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:47.103: INFO: Number of nodes with available pods: 0
Aug 26 17:48:47.103: INFO: Node kali-worker is running more than one daemon pod
Aug 26 17:48:48.124: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:48.180: INFO: Number of nodes with available pods: 2
Aug 26 17:48:48.180: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 26 17:48:48.356: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:48.359: INFO: Number of nodes with available pods: 1
Aug 26 17:48:48.359: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:49.444: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:49.447: INFO: Number of nodes with available pods: 1
Aug 26 17:48:49.447: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:50.365: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:50.368: INFO: Number of nodes with available pods: 1
Aug 26 17:48:50.369: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:51.364: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:51.366: INFO: Number of nodes with available pods: 1
Aug 26 17:48:51.366: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:52.371: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:52.408: INFO: Number of nodes with available pods: 1
Aug 26 17:48:52.408: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:53.365: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:53.369: INFO: Number of nodes with available pods: 1
Aug 26 17:48:53.369: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:54.364: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:54.367: INFO: Number of nodes with available pods: 1
Aug 26 17:48:54.367: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:55.365: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:55.369: INFO: Number of nodes with available pods: 1
Aug 26 17:48:55.369: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:48:56.364: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 26 17:48:56.368: INFO: Number of nodes with available pods: 2
Aug 26 17:48:56.368: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8793, will wait for the garbage collector to delete the pods
Aug 26 17:48:56.430: INFO: Deleting DaemonSet.extensions daemon-set took: 6.095872ms
Aug 26 17:48:56.830: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.29378ms
Aug 26 17:49:07.952: INFO: Number of nodes with available pods: 0
Aug 26 17:49:07.952: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 17:49:07.955: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8793/daemonsets","resourceVersion":"1117653"},"items":null}

Aug 26 17:49:07.957: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8793/pods","resourceVersion":"1117653"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:49:07.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8793" for this suite.

• [SLOW TEST:25.559 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":215,"skipped":3638,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:49:08.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:49:08.231: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 26 17:49:08.241: INFO: Number of nodes with available pods: 0
Aug 26 17:49:08.241: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 26 17:49:08.361: INFO: Number of nodes with available pods: 0
Aug 26 17:49:08.361: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:09.365: INFO: Number of nodes with available pods: 0
Aug 26 17:49:09.365: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:10.374: INFO: Number of nodes with available pods: 0
Aug 26 17:49:10.374: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:11.365: INFO: Number of nodes with available pods: 0
Aug 26 17:49:11.365: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:12.392: INFO: Number of nodes with available pods: 1
Aug 26 17:49:12.392: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 26 17:49:12.464: INFO: Number of nodes with available pods: 1
Aug 26 17:49:12.464: INFO: Number of running nodes: 0, number of available pods: 1
Aug 26 17:49:13.466: INFO: Number of nodes with available pods: 0
Aug 26 17:49:13.467: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 26 17:49:13.511: INFO: Number of nodes with available pods: 0
Aug 26 17:49:13.511: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:14.560: INFO: Number of nodes with available pods: 0
Aug 26 17:49:14.560: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:15.514: INFO: Number of nodes with available pods: 0
Aug 26 17:49:15.514: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:16.516: INFO: Number of nodes with available pods: 0
Aug 26 17:49:16.516: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:17.516: INFO: Number of nodes with available pods: 0
Aug 26 17:49:17.516: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:18.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:18.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:19.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:19.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:20.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:20.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:21.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:21.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:22.533: INFO: Number of nodes with available pods: 0
Aug 26 17:49:22.533: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:23.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:23.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:24.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:24.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:25.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:25.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:26.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:26.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:27.516: INFO: Number of nodes with available pods: 0
Aug 26 17:49:27.516: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:28.536: INFO: Number of nodes with available pods: 0
Aug 26 17:49:28.536: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:29.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:29.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:30.515: INFO: Number of nodes with available pods: 0
Aug 26 17:49:30.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 26 17:49:31.515: INFO: Number of nodes with available pods: 1
Aug 26 17:49:31.515: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7194, will wait for the garbage collector to delete the pods
Aug 26 17:49:31.581: INFO: Deleting DaemonSet.extensions daemon-set took: 7.195276ms
Aug 26 17:49:31.881: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275904ms
Aug 26 17:49:36.484: INFO: Number of nodes with available pods: 0
Aug 26 17:49:36.484: INFO: Number of running nodes: 0, number of available pods: 0
Aug 26 17:49:36.487: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7194/daemonsets","resourceVersion":"1117807"},"items":null}

Aug 26 17:49:36.490: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7194/pods","resourceVersion":"1117807"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:49:36.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7194" for this suite.

• [SLOW TEST:28.581 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":216,"skipped":3649,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:49:36.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-1775/secret-test-e3fc997b-c5b9-4a8e-8839-4d458b4ff49c
STEP: Creating a pod to test consume secrets
Aug 26 17:49:36.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6" in namespace "secrets-1775" to be "Succeeded or Failed"
Aug 26 17:49:36.885: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.740929ms
Aug 26 17:49:39.357: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494011482s
Aug 26 17:49:41.361: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498726755s
Aug 26 17:49:43.364: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50162054s
Aug 26 17:49:45.369: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.506172896s
STEP: Saw pod success
Aug 26 17:49:45.369: INFO: Pod "pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6" satisfied condition "Succeeded or Failed"
Aug 26 17:49:45.372: INFO: Trying to get logs from node kali-worker pod pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6 container env-test: 
STEP: delete the pod
Aug 26 17:49:45.416: INFO: Waiting for pod pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6 to disappear
Aug 26 17:49:45.422: INFO: Pod pod-configmaps-879ee353-b726-4ddd-8fa2-57f25fd7b4b6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:49:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1775" for this suite.

• [SLOW TEST:8.836 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3658,"failed":0}
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:49:45.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 26 17:49:46.023: INFO: created pod pod-service-account-defaultsa
Aug 26 17:49:46.023: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 26 17:49:46.040: INFO: created pod pod-service-account-mountsa
Aug 26 17:49:46.040: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 26 17:49:46.111: INFO: created pod pod-service-account-nomountsa
Aug 26 17:49:46.111: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 26 17:49:46.187: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 26 17:49:46.187: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 26 17:49:46.243: INFO: created pod pod-service-account-mountsa-mountspec
Aug 26 17:49:46.243: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 26 17:49:46.323: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 26 17:49:46.323: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 26 17:49:46.380: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 26 17:49:46.380: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 26 17:49:46.416: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 26 17:49:46.416: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 26 17:49:46.447: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 26 17:49:46.447: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:49:46.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4368" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":218,"skipped":3658,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:49:46.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:49:47.451: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:49:49.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:49:52.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:49:53.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:49:55.721: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:49:57.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:49:59.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060988, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734060987, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:50:02.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:50:03.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7573" for this suite.
STEP: Destroying namespace "webhook-7573-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.280 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":219,"skipped":3671,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:50:08.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 17:50:11.754: INFO: Waiting up to 5m0s for pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b" in namespace "emptydir-13" to be "Succeeded or Failed"
Aug 26 17:50:11.993: INFO: Pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 239.284686ms
Aug 26 17:50:13.999: INFO: Pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244999556s
Aug 26 17:50:16.002: INFO: Pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248173502s
Aug 26 17:50:18.007: INFO: Pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25240517s
STEP: Saw pod success
Aug 26 17:50:18.007: INFO: Pod "pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b" satisfied condition "Succeeded or Failed"
Aug 26 17:50:18.009: INFO: Trying to get logs from node kali-worker2 pod pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b container test-container: 
STEP: delete the pod
Aug 26 17:50:18.179: INFO: Waiting for pod pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b to disappear
Aug 26 17:50:18.209: INFO: Pod pod-37f3ded3-c8a7-4c78-b687-44aba1289c9b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:50:18.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-13" for this suite.

• [SLOW TEST:9.348 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3675,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:50:18.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:50:18.302: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c" in namespace "projected-2559" to be "Succeeded or Failed"
Aug 26 17:50:18.317: INFO: Pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.366121ms
Aug 26 17:50:20.321: INFO: Pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018356374s
Aug 26 17:50:22.495: INFO: Pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c": Phase="Running", Reason="", readiness=true. Elapsed: 4.192367486s
Aug 26 17:50:24.499: INFO: Pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.196247408s
STEP: Saw pod success
Aug 26 17:50:24.499: INFO: Pod "downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c" satisfied condition "Succeeded or Failed"
Aug 26 17:50:24.501: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c container client-container: 
STEP: delete the pod
Aug 26 17:50:24.522: INFO: Waiting for pod downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c to disappear
Aug 26 17:50:24.527: INFO: Pod downwardapi-volume-be01f5ac-b69c-4afc-a5f3-f85dde06b71c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:50:24.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2559" for this suite.

• [SLOW TEST:6.299 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3688,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:50:24.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-322
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-322
STEP: Deleting pre-stop pod
Aug 26 17:50:41.789: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:50:41.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-322" for this suite.

• [SLOW TEST:17.356 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":222,"skipped":3708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:50:41.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 26 17:50:42.182: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 17:50:42.505: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 17:50:42.626: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 26 17:50:42.633: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.633: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:50:42.633: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.633: INFO: 	Container app ready: true, restart count 0
Aug 26 17:50:42.633: INFO: server from prestop-322 started at 2020-08-26 17:50:24 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.633: INFO: 	Container server ready: true, restart count 0
Aug 26 17:50:42.633: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.633: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:50:42.633: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 26 17:50:42.639: INFO: tester from prestop-322 started at 2020-08-26 17:50:30 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.639: INFO: 	Container tester ready: true, restart count 0
Aug 26 17:50:42.639: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.639: INFO: 	Container app ready: true, restart count 0
Aug 26 17:50:42.639: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.639: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:50:42.639: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:42.639: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-78478393-5bf4-4ac1-8865-f7196d05e00a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-78478393-5bf4-4ac1-8865-f7196d05e00a off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-78478393-5bf4-4ac1-8865-f7196d05e00a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:50:55.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-298" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:13.141 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":223,"skipped":3734,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:50:55.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 26 17:50:55.866: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 17:50:55.921: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 17:50:55.924: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 26 17:50:55.931: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.931: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:50:55.931: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.931: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:50:55.931: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.931: INFO: 	Container app ready: true, restart count 0
Aug 26 17:50:55.931: INFO: with-labels from sched-pred-298 started at 2020-08-26 17:50:50 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.931: INFO: 	Container with-labels ready: true, restart count 0
Aug 26 17:50:55.931: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 26 17:50:55.937: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.937: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:50:55.937: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.937: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:50:55.937: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.937: INFO: 	Container app ready: true, restart count 0
Aug 26 17:50:55.937: INFO: tester from prestop-322 started at 2020-08-26 17:50:30 +0000 UTC (1 container statuses recorded)
Aug 26 17:50:55.937: INFO: 	Container tester ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ea125cfa-c0dd-47b9-aa01-05bee40c15e5 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-ea125cfa-c0dd-47b9-aa01-05bee40c15e5 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ea125cfa-c0dd-47b9-aa01-05bee40c15e5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:23.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4777" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:28.598 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":224,"skipped":3739,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:23.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:23.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5167" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3752,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:23.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-5080a2ad-5dcf-434b-88ae-c7ddf82e9512
STEP: Creating a pod to test consume secrets
Aug 26 17:51:24.464: INFO: Waiting up to 5m0s for pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c" in namespace "secrets-7819" to be "Succeeded or Failed"
Aug 26 17:51:24.477: INFO: Pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.214658ms
Aug 26 17:51:26.671: INFO: Pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206642219s
Aug 26 17:51:28.678: INFO: Pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2138441s
Aug 26 17:51:30.681: INFO: Pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21723034s
STEP: Saw pod success
Aug 26 17:51:30.681: INFO: Pod "pod-secrets-fb7110d3-5199-41f1-91af-70604239965c" satisfied condition "Succeeded or Failed"
Aug 26 17:51:30.683: INFO: Trying to get logs from node kali-worker pod pod-secrets-fb7110d3-5199-41f1-91af-70604239965c container secret-volume-test: 
STEP: delete the pod
Aug 26 17:51:31.034: INFO: Waiting for pod pod-secrets-fb7110d3-5199-41f1-91af-70604239965c to disappear
Aug 26 17:51:31.068: INFO: Pod pod-secrets-fb7110d3-5199-41f1-91af-70604239965c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:31.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7819" for this suite.

• [SLOW TEST:7.242 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3778,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:31.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:51:31.487: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6448c00f-1d05-48be-83c5-799140543281", Controller:(*bool)(0xc0033d7562), BlockOwnerDeletion:(*bool)(0xc0033d7563)}}
Aug 26 17:51:31.506: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2de040fa-c06c-4bd3-a0d4-5b9af9be4d06", Controller:(*bool)(0xc005f8dd6a), BlockOwnerDeletion:(*bool)(0xc005f8dd6b)}}
Aug 26 17:51:31.657: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"24b749ab-9780-4f5e-bdb3-04098c482a7d", Controller:(*bool)(0xc0033d775a), BlockOwnerDeletion:(*bool)(0xc0033d775b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:36.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1572" for this suite.

• [SLOW TEST:5.649 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":227,"skipped":3799,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:36.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 17:51:42.386: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:42.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6148" for this suite.

• [SLOW TEST:5.901 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3799,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
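
A minimal sketch of a container that runs as a non-root UID and reports its termination message from a non-default path, the combination exercised above. The UID, path, image and message are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Non-root pod writing DONE to a custom terminationMessagePath.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
			Containers: []corev1.Container{{
				Name:                   "termination-message-container",
				Image:                  "busybox",
				Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePath)
}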
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:42.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 26 17:51:49.337: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8045 pod-service-account-b0fc2ab2-8534-42ee-b225-c11065bb8414 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 26 17:51:49.578: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8045 pod-service-account-b0fc2ab2-8534-42ee-b225-c11065bb8414 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 26 17:51:49.775: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8045 pod-service-account-b0fc2ab2-8534-42ee-b225-c11065bb8414 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:49.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8045" for this suite.

• [SLOW TEST:7.376 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":229,"skipped":3821,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:50.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 26 17:51:50.163: INFO: Waiting up to 5m0s for pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624" in namespace "downward-api-9451" to be "Succeeded or Failed"
Aug 26 17:51:50.210: INFO: Pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624": Phase="Pending", Reason="", readiness=false. Elapsed: 46.32735ms
Aug 26 17:51:52.542: INFO: Pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378643951s
Aug 26 17:51:54.619: INFO: Pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624": Phase="Running", Reason="", readiness=true. Elapsed: 4.456189058s
Aug 26 17:51:56.656: INFO: Pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.493179555s
STEP: Saw pod success
Aug 26 17:51:56.656: INFO: Pod "downward-api-f1044d4e-7231-4107-8973-a99a9e962624" satisfied condition "Succeeded or Failed"
Aug 26 17:51:56.662: INFO: Trying to get logs from node kali-worker pod downward-api-f1044d4e-7231-4107-8973-a99a9e962624 container dapi-container: 
STEP: delete the pod
Aug 26 17:51:56.799: INFO: Waiting for pod downward-api-f1044d4e-7231-4107-8973-a99a9e962624 to disappear
Aug 26 17:51:56.847: INFO: Pod downward-api-f1044d4e-7231-4107-8973-a99a9e962624 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:51:56.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9451" for this suite.

• [SLOW TEST:6.937 seconds]
[sig-node] Downward API
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
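
The dapi-container above gets its pod name, namespace and IP via downward API env vars. A minimal sketch of that wiring, with illustrative variable names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Env vars sourced from the downward API via fieldRef.
	env := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env:     env,
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env)
}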
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:51:56.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:51:57.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed" in namespace "projected-4859" to be "Succeeded or Failed"
Aug 26 17:51:57.522: INFO: Pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed": Phase="Pending", Reason="", readiness=false. Elapsed: 81.45379ms
Aug 26 17:51:59.527: INFO: Pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085821512s
Aug 26 17:52:01.671: INFO: Pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229592731s
Aug 26 17:52:03.837: INFO: Pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.396356685s
STEP: Saw pod success
Aug 26 17:52:03.837: INFO: Pod "downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed" satisfied condition "Succeeded or Failed"
Aug 26 17:52:03.840: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed container client-container: 
STEP: delete the pod
Aug 26 17:52:05.647: INFO: Waiting for pod downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed to disappear
Aug 26 17:52:06.214: INFO: Pod downwardapi-volume-4e977b4b-f0e0-4b5a-86be-12410b6158ed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:52:06.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4859" for this suite.

• [SLOW TEST:9.270 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3954,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:52:06.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:52:07.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6" in namespace "projected-268" to be "Succeeded or Failed"
Aug 26 17:52:07.357: INFO: Pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6": Phase="Pending", Reason="", readiness=false. Elapsed: 241.840536ms
Aug 26 17:52:09.362: INFO: Pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246576935s
Aug 26 17:52:11.514: INFO: Pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398441425s
Aug 26 17:52:13.519: INFO: Pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.403017856s
STEP: Saw pod success
Aug 26 17:52:13.519: INFO: Pod "downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6" satisfied condition "Succeeded or Failed"
Aug 26 17:52:13.522: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6 container client-container: 
STEP: delete the pod
Aug 26 17:52:13.580: INFO: Waiting for pod downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6 to disappear
Aug 26 17:52:13.784: INFO: Pod downwardapi-volume-467c9ef9-7921-44ff-b8fe-6e18fe1d76c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:52:13.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-268" for this suite.

• [SLOW TEST:7.667 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3962,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
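
A sketch of how a projected downwardAPI volume can expose a container's CPU limit as a file, which is the mechanism the client-container above reads. The mount path, divisor and limit value are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Projected volume with a downwardAPI source using resourceFieldRef.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.Projected.Sources[0].DownwardAPI.Items[0].Path)
}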
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:52:13.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-cad2459a-d894-41f3-be60-ef7b4c247cf1 in namespace container-probe-417
Aug 26 17:52:18.385: INFO: Started pod busybox-cad2459a-d894-41f3-be60-ef7b4c247cf1 in namespace container-probe-417
STEP: checking the pod's current state and verifying that restartCount is present
Aug 26 17:52:18.388: INFO: Initial restart count of pod busybox-cad2459a-d894-41f3-be60-ef7b4c247cf1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:18.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-417" for this suite.

• [SLOW TEST:244.675 seconds]
[k8s.io] Probing container
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":3990,"failed":0}
SSSSSSSSSSS
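
A sketch of a busybox pod whose exec liveness probe keeps succeeding because /tmp/health exists for the whole observation window, so restartCount stays at 0 as asserted above. Timings are illustrative, and the struct uses the v1.18-era API where Probe embeds Handler (newer releases call it ProbeHandler).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with an exec "cat /tmp/health" liveness probe that never fails.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].LivenessProbe.Exec.Command)
}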
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:18.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:56:19.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:56:21.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:56:23.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061379, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:56:26.277: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:26.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2664" for this suite.
STEP: Destroying namespace "webhook-2664-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.966 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":234,"skipped":4001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
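
A sketch of the shape of a fail-closed registration like the one above: the webhook points at a service path the apiserver cannot reach, and failurePolicy Fail means matching requests (here, configmap creates) are rejected outright. The configuration name, namespace, service name and path are illustrative assumptions.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

func main() {
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers",
					Name:      "e2e-test-webhook",
					Path:      strPtr("/unreachable"),
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Println(*cfg.Webhooks[0].FailurePolicy)
}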
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:27.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 17:56:28.094: INFO: Waiting up to 5m0s for pod "pod-1d328a77-8420-43ac-813f-f1f79afcf698" in namespace "emptydir-229" to be "Succeeded or Failed"
Aug 26 17:56:28.230: INFO: Pod "pod-1d328a77-8420-43ac-813f-f1f79afcf698": Phase="Pending", Reason="", readiness=false. Elapsed: 135.585573ms
Aug 26 17:56:30.234: INFO: Pod "pod-1d328a77-8420-43ac-813f-f1f79afcf698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139682079s
Aug 26 17:56:32.333: INFO: Pod "pod-1d328a77-8420-43ac-813f-f1f79afcf698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.238528986s
STEP: Saw pod success
Aug 26 17:56:32.333: INFO: Pod "pod-1d328a77-8420-43ac-813f-f1f79afcf698" satisfied condition "Succeeded or Failed"
Aug 26 17:56:32.336: INFO: Trying to get logs from node kali-worker2 pod pod-1d328a77-8420-43ac-813f-f1f79afcf698 container test-container: 
STEP: delete the pod
Aug 26 17:56:32.693: INFO: Waiting for pod pod-1d328a77-8420-43ac-813f-f1f79afcf698 to disappear
Aug 26 17:56:32.704: INFO: Pod pod-1d328a77-8420-43ac-813f-f1f79afcf698 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:32.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-229" for this suite.

• [SLOW TEST:5.190 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
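
A sketch of the non-root, tmpfs-backed emptyDir case above: the pod runs as a non-root UID, the volume uses medium Memory, and the container writes a 0644 file and lists it. The UID, mount path and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Non-root pod with a Memory-medium emptyDir volume.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1001)},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].VolumeSource.EmptyDir.Medium)
}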
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:32.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-46860a2d-a292-4df4-bae7-60cfd5ab58ad
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-46860a2d-a292-4df4-bae7-60cfd5ab58ad
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:39.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7827" for this suite.

• [SLOW TEST:6.425 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4073,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:39.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 17:56:41.017: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 17:56:43.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061400, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 17:56:45.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061401, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061400, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 17:56:48.168: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:48.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7926" for this suite.
STEP: Destroying namespace "webhook-7926-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.338 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":237,"skipped":4084,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:48.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:56:53.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1131" for this suite.

• [SLOW TEST:5.040 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":238,"skipped":4093,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:56:53.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9889
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-9889
STEP: creating replication controller externalsvc in namespace services-9889
I0826 17:56:54.380289       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9889, replica count: 2
I0826 17:56:57.430929       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0826 17:57:00.431160       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 26 17:57:00.512: INFO: Creating new exec pod
Aug 26 17:57:06.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config exec --namespace=services-9889 execpod957r5 -- /bin/sh -x -c nslookup clusterip-service'
Aug 26 17:57:09.631: INFO: stderr: "I0826 17:57:09.517410    3940 log.go:172] (0xc000bea2c0) (0xc0008277c0) Create stream\nI0826 17:57:09.517478    3940 log.go:172] (0xc000bea2c0) (0xc0008277c0) Stream added, broadcasting: 1\nI0826 17:57:09.519501    3940 log.go:172] (0xc000bea2c0) Reply frame received for 1\nI0826 17:57:09.519534    3940 log.go:172] (0xc000bea2c0) (0xc0005d8c80) Create stream\nI0826 17:57:09.519545    3940 log.go:172] (0xc000bea2c0) (0xc0005d8c80) Stream added, broadcasting: 3\nI0826 17:57:09.520576    3940 log.go:172] (0xc000bea2c0) Reply frame received for 3\nI0826 17:57:09.520626    3940 log.go:172] (0xc000bea2c0) (0xc0006b2000) Create stream\nI0826 17:57:09.520653    3940 log.go:172] (0xc000bea2c0) (0xc0006b2000) Stream added, broadcasting: 5\nI0826 17:57:09.521778    3940 log.go:172] (0xc000bea2c0) Reply frame received for 5\nI0826 17:57:09.611688    3940 log.go:172] (0xc000bea2c0) Data frame received for 5\nI0826 17:57:09.611729    3940 log.go:172] (0xc0006b2000) (5) Data frame handling\nI0826 17:57:09.611756    3940 log.go:172] (0xc0006b2000) (5) Data frame sent\n+ nslookup clusterip-service\nI0826 17:57:09.618463    3940 log.go:172] (0xc000bea2c0) Data frame received for 3\nI0826 17:57:09.618482    3940 log.go:172] (0xc0005d8c80) (3) Data frame handling\nI0826 17:57:09.618495    3940 log.go:172] (0xc0005d8c80) (3) Data frame sent\nI0826 17:57:09.619561    3940 log.go:172] (0xc000bea2c0) Data frame received for 3\nI0826 17:57:09.619585    3940 log.go:172] (0xc0005d8c80) (3) Data frame handling\nI0826 17:57:09.619602    3940 log.go:172] (0xc0005d8c80) (3) Data frame sent\nI0826 17:57:09.620159    3940 log.go:172] (0xc000bea2c0) Data frame received for 3\nI0826 17:57:09.620178    3940 log.go:172] (0xc000bea2c0) Data frame received for 5\nI0826 17:57:09.620195    3940 log.go:172] (0xc0006b2000) (5) Data frame handling\nI0826 17:57:09.620212    3940 log.go:172] (0xc0005d8c80) (3) Data frame handling\nI0826 17:57:09.621894    3940 log.go:172] (0xc000bea2c0) Data frame received for 1\nI0826 17:57:09.621908    3940 log.go:172] (0xc0008277c0) (1) Data frame handling\nI0826 17:57:09.621915    3940 log.go:172] (0xc0008277c0) (1) Data frame sent\nI0826 17:57:09.621924    3940 log.go:172] (0xc000bea2c0) (0xc0008277c0) Stream removed, broadcasting: 1\nI0826 17:57:09.622203    3940 log.go:172] (0xc000bea2c0) (0xc0008277c0) Stream removed, broadcasting: 1\nI0826 17:57:09.622217    3940 log.go:172] (0xc000bea2c0) (0xc0005d8c80) Stream removed, broadcasting: 3\nI0826 17:57:09.622337    3940 log.go:172] (0xc000bea2c0) (0xc0006b2000) Stream removed, broadcasting: 5\nI0826 17:57:09.622387    3940 log.go:172] (0xc000bea2c0) Go away received\n"
Aug 26 17:57:09.631: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9889.svc.cluster.local\tcanonical name = externalsvc.services-9889.svc.cluster.local.\nName:\texternalsvc.services-9889.svc.cluster.local\nAddress: 10.108.225.172\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-9889, will wait for the garbage collector to delete the pods
Aug 26 17:57:09.813: INFO: Deleting ReplicationController externalsvc took: 129.065133ms
Aug 26 17:57:10.113: INFO: Terminating ReplicationController externalsvc pods took: 300.226538ms
Aug 26 17:57:18.214: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:57:18.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9889" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:24.744 seconds]
[sig-network] Services
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":239,"skipped":4110,"failed":0}
SSSSSSSSSSSS
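
The nslookup output above shows the effect of the conversion: after the update, the service name resolves to a CNAME for the externalsvc FQDN instead of a cluster IP. A minimal sketch of that update, with an illustrative target FQDN and namespace:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Start from a ClusterIP service...
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "clusterip-service"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeClusterIP,
			Ports:    []corev1.ServicePort{{Port: 80}},
			Selector: map[string]string{"name": "externalsvc"},
		},
	}
	// ...and rewrite it to type ExternalName, clearing the fields that no longer apply.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-example.svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service carries no cluster IP
	svc.Spec.Selector = nil
	svc.Spec.Ports = nil
	fmt.Println(svc.Spec.Type, svc.Spec.ExternalName)
}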
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:57:18.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:57:32.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5314" for this suite.

• [SLOW TEST:13.861 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":240,"skipped":4122,"failed":0}
SSSSSS
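
A sketch of a ResourceQuota along the lines the steps above create and then track while a fitting pod is admitted, an oversized pod is rejected, and the usage is released on deletion. The quota name and hard limits are illustrative assumptions, not the suite's values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Namespace quota limiting pod count and requested CPU/memory.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:           resource.MustParse("2"),
				corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
				corev1.ResourceRequestsMemory: resource.MustParse("256Mi"),
			},
		},
	}
	fmt.Println(quota.Spec.Hard)
}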
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:57:32.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:57:32.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 26 17:57:34.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6115 create -f -'
Aug 26 17:57:39.232: INFO: stderr: ""
Aug 26 17:57:39.232: INFO: stdout: "e2e-test-crd-publish-openapi-3688-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 17:57:39.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6115 delete e2e-test-crd-publish-openapi-3688-crds test-cr'
Aug 26 17:57:39.340: INFO: stderr: ""
Aug 26 17:57:39.340: INFO: stdout: "e2e-test-crd-publish-openapi-3688-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 26 17:57:39.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6115 apply -f -'
Aug 26 17:57:40.380: INFO: stderr: ""
Aug 26 17:57:40.380: INFO: stdout: "e2e-test-crd-publish-openapi-3688-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 26 17:57:40.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6115 delete e2e-test-crd-publish-openapi-3688-crds test-cr'
Aug 26 17:57:42.837: INFO: stderr: ""
Aug 26 17:57:42.837: INFO: stdout: "e2e-test-crd-publish-openapi-3688-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 26 17:57:42.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3688-crds'
Aug 26 17:57:43.476: INFO: stderr: ""
Aug 26 17:57:43.477: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3688-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:57:46.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6115" for this suite.

• [SLOW TEST:14.265 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":241,"skipped":4128,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:57:46.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Aug 26 17:57:53.076: INFO: Successfully updated pod "adopt-release-5ww45"
STEP: Checking that the Job readopts the Pod
Aug 26 17:57:53.076: INFO: Waiting up to 15m0s for pod "adopt-release-5ww45" in namespace "job-6571" to be "adopted"
Aug 26 17:57:53.103: INFO: Pod "adopt-release-5ww45": Phase="Running", Reason="", readiness=true. Elapsed: 27.177483ms
Aug 26 17:57:55.108: INFO: Pod "adopt-release-5ww45": Phase="Running", Reason="", readiness=true. Elapsed: 2.031651827s
Aug 26 17:57:55.108: INFO: Pod "adopt-release-5ww45" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Aug 26 17:57:55.626: INFO: Successfully updated pod "adopt-release-5ww45"
STEP: Checking that the Job releases the Pod
Aug 26 17:57:55.626: INFO: Waiting up to 15m0s for pod "adopt-release-5ww45" in namespace "job-6571" to be "released"
Aug 26 17:57:55.699: INFO: Pod "adopt-release-5ww45": Phase="Running", Reason="", readiness=true. Elapsed: 73.636489ms
Aug 26 17:57:55.699: INFO: Pod "adopt-release-5ww45" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:57:55.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6571" for this suite.

• [SLOW TEST:9.427 seconds]
[sig-apps] Job
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":242,"skipped":4158,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:57:55.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:57:55.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Aug 26 17:57:58.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 create -f -'
Aug 26 17:58:04.735: INFO: stderr: ""
Aug 26 17:58:04.735: INFO: stdout: "e2e-test-crd-publish-openapi-251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 17:58:04.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 delete e2e-test-crd-publish-openapi-251-crds test-foo'
Aug 26 17:58:04.870: INFO: stderr: ""
Aug 26 17:58:04.870: INFO: stdout: "e2e-test-crd-publish-openapi-251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Aug 26 17:58:04.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 apply -f -'
Aug 26 17:58:05.170: INFO: stderr: ""
Aug 26 17:58:05.170: INFO: stdout: "e2e-test-crd-publish-openapi-251-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Aug 26 17:58:05.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 delete e2e-test-crd-publish-openapi-251-crds test-foo'
Aug 26 17:58:05.285: INFO: stderr: ""
Aug 26 17:58:05.285: INFO: stdout: "e2e-test-crd-publish-openapi-251-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Aug 26 17:58:05.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 create -f -'
Aug 26 17:58:05.557: INFO: rc: 1
Aug 26 17:58:05.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 apply -f -'
Aug 26 17:58:05.869: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Aug 26 17:58:05.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 create -f -'
Aug 26 17:58:06.486: INFO: rc: 1
Aug 26 17:58:06.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2433 apply -f -'
Aug 26 17:58:06.731: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Aug 26 17:58:06.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-251-crds'
Aug 26 17:58:07.569: INFO: stderr: ""
Aug 26 17:58:07.569: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-251-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Aug 26 17:58:07.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-251-crds.metadata'
Aug 26 17:58:07.884: INFO: stderr: ""
Aug 26 17:58:07.884: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-251-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Aug 26 17:58:07.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-251-crds.spec'
Aug 26 17:58:08.200: INFO: stderr: ""
Aug 26 17:58:08.201: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-251-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 26 17:58:08.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-251-crds.spec.bars'
Aug 26 17:58:08.674: INFO: stderr: ""
Aug 26 17:58:08.674: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-251-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 26 17:58:08.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-251-crds.spec.bars2'
Aug 26 17:58:08.967: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:58:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2433" for this suite.

• [SLOW TEST:15.039 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":243,"skipped":4169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:58:10.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 26 17:58:10.942: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 17:58:10.963: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 17:58:10.969: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 26 17:58:10.984: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:10.985: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:58:10.985: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:10.985: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 17:58:10.985: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:10.985: INFO: 	Container app ready: true, restart count 0
Aug 26 17:58:10.985: INFO: adopt-release-5ww45 from job-6571 started at 2020-08-26 17:57:46 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:10.985: INFO: 	Container c ready: true, restart count 0
Aug 26 17:58:10.985: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 26 17:58:11.003: INFO: adopt-release-www6s from job-6571 started at 2020-08-26 17:57:55 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:11.003: INFO: 	Container c ready: true, restart count 0
Aug 26 17:58:11.003: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:11.003: INFO: 	Container app ready: true, restart count 0
Aug 26 17:58:11.003: INFO: adopt-release-h4pdd from job-6571 started at 2020-08-26 17:57:46 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:11.003: INFO: 	Container c ready: true, restart count 0
Aug 26 17:58:11.003: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:11.003: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 17:58:11.003: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container statuses recorded)
Aug 26 17:58:11.003: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162ee32ae0c8d48f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:58:12.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1759" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":244,"skipped":4193,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:58:12.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 17:58:12.117: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: 
alternatives.log
containers/

[identical two-entry log-directory listings were returned for the remaining proxied requests; the capture is truncated here, losing the end of this Proxy test and the header of the following Kubelet test]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:58:16.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9671" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4222,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:58:16.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0826 17:58:18.252249       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 26 17:58:18.252: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:58:18.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9093" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":247,"skipped":4224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:58:18.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 26 17:58:18.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120497 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:58:18.926: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120497 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 26 17:58:28.935: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120550 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:58:28.936: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120550 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 26 17:58:38.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120587 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:58:38.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120587 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 26 17:58:48.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120618 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:58:48.982: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-a 7e4c0970-5d4c-46b5-af92-d48b51b767fa 1120618 0 2020-08-26 17:58:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 26 17:58:58.989: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-b 1c9f67dd-95cc-46cb-8517-003c816e3f9d 1120651 0 2020-08-26 17:58:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:58:58.989: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-b 1c9f67dd-95cc-46cb-8517-003c816e3f9d 1120651 0 2020-08-26 17:58:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 26 17:59:08.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-b 1c9f67dd-95cc-46cb-8517-003c816e3f9d 1120681 0 2020-08-26 17:58:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 26 17:59:08.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2248 /api/v1/namespaces/watch-2248/configmaps/e2e-watch-test-configmap-b 1c9f67dd-95cc-46cb-8517-003c816e3f9d 1120681 0 2020-08-26 17:58:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-08-26 17:58:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:18.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2248" for this suite.

• [SLOW TEST:60.747 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":248,"skipped":4246,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:19.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 17:59:23.564: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1594" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4258,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:23.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909
Aug 26 17:59:23.895: INFO: Pod name my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909: Found 0 pods out of 1
Aug 26 17:59:28.921: INFO: Pod name my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909: Found 1 pods out of 1
Aug 26 17:59:28.921: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909" are running
Aug 26 17:59:28.937: INFO: Pod "my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909-hf9ks" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:59:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:59:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:59:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-26 17:59:23 +0000 UTC Reason: Message:}])
Aug 26 17:59:28.937: INFO: Trying to dial the pod
Aug 26 17:59:33.949: INFO: Controller my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909: Got expected result from replica 1 [my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909-hf9ks]: "my-hostname-basic-3f78cd6a-8b2e-43f3-a428-39e732ad9909-hf9ks", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:33.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7896" for this suite.

• [SLOW TEST:10.193 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":250,"skipped":4270,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:33.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 26 17:59:34.121: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226" in namespace "downward-api-2981" to be "Succeeded or Failed"
Aug 26 17:59:34.124: INFO: Pod "downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061663ms
Aug 26 17:59:36.137: INFO: Pod "downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015604674s
Aug 26 17:59:38.321: INFO: Pod "downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199870953s
STEP: Saw pod success
Aug 26 17:59:38.321: INFO: Pod "downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226" satisfied condition "Succeeded or Failed"
Aug 26 17:59:38.324: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226 container client-container: 
STEP: delete the pod
Aug 26 17:59:38.374: INFO: Waiting for pod downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226 to disappear
Aug 26 17:59:38.394: INFO: Pod downwardapi-volume-c5b6751c-fbdf-4a0a-849d-708400a96226 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:38.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2981" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4275,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:38.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 17:59:38.516: INFO: Waiting up to 5m0s for pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966" in namespace "emptydir-16" to be "Succeeded or Failed"
Aug 26 17:59:38.535: INFO: Pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966": Phase="Pending", Reason="", readiness=false. Elapsed: 19.05839ms
Aug 26 17:59:41.112: INFO: Pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595945486s
Aug 26 17:59:43.245: INFO: Pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.729637931s
Aug 26 17:59:45.249: INFO: Pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.733764068s
STEP: Saw pod success
Aug 26 17:59:45.249: INFO: Pod "pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966" satisfied condition "Succeeded or Failed"
Aug 26 17:59:45.252: INFO: Trying to get logs from node kali-worker pod pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966 container test-container: 
STEP: delete the pod
Aug 26 17:59:45.504: INFO: Waiting for pod pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966 to disappear
Aug 26 17:59:45.800: INFO: Pod pod-bfa0809c-ea8b-491a-bcf7-9f8c3f4f3966 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:45.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-16" for this suite.

• [SLOW TEST:7.532 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4281,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:45.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Aug 26 17:59:46.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config api-versions'
Aug 26 17:59:46.515: INFO: stderr: ""
Aug 26 17:59:46.515: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:46.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1585" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":253,"skipped":4288,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:46.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 17:59:47.600: INFO: Waiting up to 5m0s for pod "pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e" in namespace "emptydir-7090" to be "Succeeded or Failed"
Aug 26 17:59:47.716: INFO: Pod "pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e": Phase="Pending", Reason="", readiness=false. Elapsed: 116.865725ms
Aug 26 17:59:49.721: INFO: Pod "pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121256927s
Aug 26 17:59:51.735: INFO: Pod "pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135017805s
STEP: Saw pod success
Aug 26 17:59:51.735: INFO: Pod "pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e" satisfied condition "Succeeded or Failed"
Aug 26 17:59:51.892: INFO: Trying to get logs from node kali-worker2 pod pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e container test-container: 
STEP: delete the pod
Aug 26 17:59:52.210: INFO: Waiting for pod pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e to disappear
Aug 26 17:59:52.213: INFO: Pod pod-172a02cd-20d6-46fb-aa08-7a06fd02e51e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 17:59:52.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7090" for this suite.

• [SLOW TEST:5.697 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4302,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 17:59:52.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 26 17:59:52.272: INFO: namespace kubectl-6595
Aug 26 17:59:52.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6595'
Aug 26 17:59:52.734: INFO: stderr: ""
Aug 26 17:59:52.734: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 26 17:59:53.737: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:53.738: INFO: Found 0 / 1
Aug 26 17:59:54.738: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:54.738: INFO: Found 0 / 1
Aug 26 17:59:55.738: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:55.738: INFO: Found 0 / 1
Aug 26 17:59:56.738: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:56.738: INFO: Found 0 / 1
Aug 26 17:59:57.739: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:57.739: INFO: Found 1 / 1
Aug 26 17:59:57.739: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 26 17:59:57.742: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 26 17:59:57.742: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 26 17:59:57.742: INFO: wait on agnhost-master startup in kubectl-6595 
Aug 26 17:59:57.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config logs agnhost-master-f4n94 agnhost-master --namespace=kubectl-6595'
Aug 26 17:59:57.855: INFO: stderr: ""
Aug 26 17:59:57.855: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 26 17:59:57.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6595'
Aug 26 17:59:58.002: INFO: stderr: ""
Aug 26 17:59:58.002: INFO: stdout: "service/rm2 exposed\n"
Aug 26 17:59:58.057: INFO: Service rm2 in namespace kubectl-6595 found.
STEP: exposing service
Aug 26 18:00:00.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6595'
Aug 26 18:00:00.299: INFO: stderr: ""
Aug 26 18:00:00.299: INFO: stdout: "service/rm3 exposed\n"
Aug 26 18:00:00.342: INFO: Service rm3 in namespace kubectl-6595 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:02.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6595" for this suite.

• [SLOW TEST:10.137 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":255,"skipped":4308,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:02.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 18:00:02.493: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:09.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3977" for this suite.

• [SLOW TEST:7.301 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":256,"skipped":4318,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:09.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-45a60b77-283b-4afb-a818-163b78508632
STEP: Creating a pod to test consume secrets
Aug 26 18:00:09.830: INFO: Waiting up to 5m0s for pod "pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050" in namespace "secrets-3758" to be "Succeeded or Failed"
Aug 26 18:00:09.970: INFO: Pod "pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050": Phase="Pending", Reason="", readiness=false. Elapsed: 140.406809ms
Aug 26 18:00:11.973: INFO: Pod "pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143791338s
Aug 26 18:00:14.431: INFO: Pod "pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.601235462s
STEP: Saw pod success
Aug 26 18:00:14.431: INFO: Pod "pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050" satisfied condition "Succeeded or Failed"
Aug 26 18:00:14.840: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050 container secret-volume-test: 
STEP: delete the pod
Aug 26 18:00:15.492: INFO: Waiting for pod pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050 to disappear
Aug 26 18:00:15.896: INFO: Pod pod-secrets-539d0e77-cece-444b-89f3-fb4202a6d050 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:15.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3758" for this suite.

• [SLOW TEST:6.376 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4342,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:16.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-dccaec96-21c4-466f-afb2-f1b44b00ccf3
STEP: Creating a pod to test consume configMaps
Aug 26 18:00:16.267: INFO: Waiting up to 5m0s for pod "pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8" in namespace "configmap-2264" to be "Succeeded or Failed"
Aug 26 18:00:16.286: INFO: Pod "pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.980531ms
Aug 26 18:00:18.290: INFO: Pod "pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023011234s
Aug 26 18:00:20.294: INFO: Pod "pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027502883s
STEP: Saw pod success
Aug 26 18:00:20.294: INFO: Pod "pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8" satisfied condition "Succeeded or Failed"
Aug 26 18:00:20.297: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8 container configmap-volume-test: 
STEP: delete the pod
Aug 26 18:00:20.351: INFO: Waiting for pod pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8 to disappear
Aug 26 18:00:20.448: INFO: Pod pod-configmaps-d1ff1c29-2117-406f-b5af-294f7aeb6be8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:20.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2264" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4361,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:20.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 18:00:24.802: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:24.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7029" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4372,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:24.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 18:00:25.408: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 18:00:27.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 18:00:29.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061625, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 18:00:32.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:00:34.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3848" for this suite.
STEP: Destroying namespace "webhook-3848-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.975 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":260,"skipped":4381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:00:34.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-140
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 26 18:00:35.342: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 26 18:00:36.005: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 18:00:38.009: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 18:00:40.012: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 26 18:00:42.009: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 18:00:44.034: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 18:00:46.008: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 18:00:48.008: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 18:00:50.009: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 26 18:00:52.008: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 26 18:00:52.013: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 26 18:00:58.106: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.82 8081 | grep -v '^\s*$'] Namespace:pod-network-test-140 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 18:00:58.106: INFO: >>> kubeConfig: /root/.kube/config
I0826 18:00:58.134808       7 log.go:172] (0xc0030d5ad0) (0xc002fe06e0) Create stream
I0826 18:00:58.134834       7 log.go:172] (0xc0030d5ad0) (0xc002fe06e0) Stream added, broadcasting: 1
I0826 18:00:58.137105       7 log.go:172] (0xc0030d5ad0) Reply frame received for 1
I0826 18:00:58.137152       7 log.go:172] (0xc0030d5ad0) (0xc002bd0d20) Create stream
I0826 18:00:58.137164       7 log.go:172] (0xc0030d5ad0) (0xc002bd0d20) Stream added, broadcasting: 3
I0826 18:00:58.138081       7 log.go:172] (0xc0030d5ad0) Reply frame received for 3
I0826 18:00:58.138120       7 log.go:172] (0xc0030d5ad0) (0xc002fe0a00) Create stream
I0826 18:00:58.138138       7 log.go:172] (0xc0030d5ad0) (0xc002fe0a00) Stream added, broadcasting: 5
I0826 18:00:58.138928       7 log.go:172] (0xc0030d5ad0) Reply frame received for 5
I0826 18:00:59.209162       7 log.go:172] (0xc0030d5ad0) Data frame received for 3
I0826 18:00:59.209276       7 log.go:172] (0xc002bd0d20) (3) Data frame handling
I0826 18:00:59.209312       7 log.go:172] (0xc002bd0d20) (3) Data frame sent
I0826 18:00:59.209343       7 log.go:172] (0xc0030d5ad0) Data frame received for 3
I0826 18:00:59.209362       7 log.go:172] (0xc002bd0d20) (3) Data frame handling
I0826 18:00:59.209385       7 log.go:172] (0xc0030d5ad0) Data frame received for 5
I0826 18:00:59.209403       7 log.go:172] (0xc002fe0a00) (5) Data frame handling
I0826 18:00:59.211832       7 log.go:172] (0xc0030d5ad0) Data frame received for 1
I0826 18:00:59.211863       7 log.go:172] (0xc002fe06e0) (1) Data frame handling
I0826 18:00:59.211884       7 log.go:172] (0xc002fe06e0) (1) Data frame sent
I0826 18:00:59.211910       7 log.go:172] (0xc0030d5ad0) (0xc002fe06e0) Stream removed, broadcasting: 1
I0826 18:00:59.211941       7 log.go:172] (0xc0030d5ad0) Go away received
I0826 18:00:59.212078       7 log.go:172] (0xc0030d5ad0) (0xc002fe06e0) Stream removed, broadcasting: 1
I0826 18:00:59.212116       7 log.go:172] (0xc0030d5ad0) (0xc002bd0d20) Stream removed, broadcasting: 3
I0826 18:00:59.212141       7 log.go:172] (0xc0030d5ad0) (0xc002fe0a00) Stream removed, broadcasting: 5
Aug 26 18:00:59.212: INFO: Found all expected endpoints: [netserver-0]
Aug 26 18:00:59.216: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.163 8081 | grep -v '^\s*$'] Namespace:pod-network-test-140 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 26 18:00:59.216: INFO: >>> kubeConfig: /root/.kube/config
I0826 18:00:59.244451       7 log.go:172] (0xc00155a160) (0xc002fe1040) Create stream
I0826 18:00:59.244480       7 log.go:172] (0xc00155a160) (0xc002fe1040) Stream added, broadcasting: 1
I0826 18:00:59.246509       7 log.go:172] (0xc00155a160) Reply frame received for 1
I0826 18:00:59.246548       7 log.go:172] (0xc00155a160) (0xc001f94c80) Create stream
I0826 18:00:59.246562       7 log.go:172] (0xc00155a160) (0xc001f94c80) Stream added, broadcasting: 3
I0826 18:00:59.247560       7 log.go:172] (0xc00155a160) Reply frame received for 3
I0826 18:00:59.247599       7 log.go:172] (0xc00155a160) (0xc001f94e60) Create stream
I0826 18:00:59.247616       7 log.go:172] (0xc00155a160) (0xc001f94e60) Stream added, broadcasting: 5
I0826 18:00:59.248463       7 log.go:172] (0xc00155a160) Reply frame received for 5
I0826 18:01:00.326037       7 log.go:172] (0xc00155a160) Data frame received for 3
I0826 18:01:00.326093       7 log.go:172] (0xc001f94c80) (3) Data frame handling
I0826 18:01:00.326125       7 log.go:172] (0xc001f94c80) (3) Data frame sent
I0826 18:01:00.326144       7 log.go:172] (0xc00155a160) Data frame received for 3
I0826 18:01:00.326159       7 log.go:172] (0xc001f94c80) (3) Data frame handling
I0826 18:01:00.326242       7 log.go:172] (0xc00155a160) Data frame received for 5
I0826 18:01:00.326256       7 log.go:172] (0xc001f94e60) (5) Data frame handling
I0826 18:01:00.327941       7 log.go:172] (0xc00155a160) Data frame received for 1
I0826 18:01:00.327973       7 log.go:172] (0xc002fe1040) (1) Data frame handling
I0826 18:01:00.327996       7 log.go:172] (0xc002fe1040) (1) Data frame sent
I0826 18:01:00.328018       7 log.go:172] (0xc00155a160) (0xc002fe1040) Stream removed, broadcasting: 1
I0826 18:01:00.328042       7 log.go:172] (0xc00155a160) Go away received
I0826 18:01:00.328238       7 log.go:172] (0xc00155a160) (0xc002fe1040) Stream removed, broadcasting: 1
I0826 18:01:00.328320       7 log.go:172] (0xc00155a160) (0xc001f94c80) Stream removed, broadcasting: 3
I0826 18:01:00.328346       7 log.go:172] (0xc00155a160) (0xc001f94e60) Stream removed, broadcasting: 5
Aug 26 18:01:00.328: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:00.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-140" for this suite.

• [SLOW TEST:25.534 seconds]
[sig-network] Networking
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4429,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:00.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-273e5ed3-a67f-4b7b-9d30-73a660dfcd18
STEP: Creating a pod to test consume configMaps
Aug 26 18:01:00.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19" in namespace "projected-5821" to be "Succeeded or Failed"
Aug 26 18:01:00.562: INFO: Pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19": Phase="Pending", Reason="", readiness=false. Elapsed: 84.111515ms
Aug 26 18:01:02.587: INFO: Pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109034624s
Aug 26 18:01:04.591: INFO: Pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112289676s
Aug 26 18:01:07.037: INFO: Pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.558632765s
STEP: Saw pod success
Aug 26 18:01:07.037: INFO: Pod "pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19" satisfied condition "Succeeded or Failed"
Aug 26 18:01:07.083: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 18:01:07.567: INFO: Waiting for pod pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19 to disappear
Aug 26 18:01:07.687: INFO: Pod pod-projected-configmaps-8e123941-555a-46ab-b1db-d19c00f54e19 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:07.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5821" for this suite.

• [SLOW TEST:7.575 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:07.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:25.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2937" for this suite.

• [SLOW TEST:17.237 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":263,"skipped":4493,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:25.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-15824822-f0f9-4e42-acd5-9b1e82e8050b
STEP: Creating secret with name s-test-opt-upd-81f5f59a-9d13-4750-9b67-40ffc83d6fd1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-15824822-f0f9-4e42-acd5-9b1e82e8050b
STEP: Updating secret s-test-opt-upd-81f5f59a-9d13-4750-9b67-40ffc83d6fd1
STEP: Creating secret with name s-test-opt-create-1b3b8326-29a3-4de8-afa8-506d8827cf24
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:35.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4950" for this suite.

• [SLOW TEST:10.703 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4507,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:35.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 26 18:01:35.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-106'
Aug 26 18:01:36.020: INFO: stderr: ""
Aug 26 18:01:36.020: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 26 18:01:36.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:44383 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-106'
Aug 26 18:01:47.688: INFO: stderr: ""
Aug 26 18:01:47.688: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:47.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-106" for this suite.

• [SLOW TEST:11.843 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":265,"skipped":4521,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:47.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-29b5eab7-8fa7-4876-828c-c5da6bebf3f0
STEP: Creating a pod to test consume configMaps
Aug 26 18:01:47.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd" in namespace "configmap-6303" to be "Succeeded or Failed"
Aug 26 18:01:47.903: INFO: Pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 45.691239ms
Aug 26 18:01:49.972: INFO: Pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114022751s
Aug 26 18:01:51.976: INFO: Pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117937789s
Aug 26 18:01:53.980: INFO: Pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121731298s
STEP: Saw pod success
Aug 26 18:01:53.980: INFO: Pod "pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd" satisfied condition "Succeeded or Failed"
Aug 26 18:01:53.983: INFO: Trying to get logs from node kali-worker pod pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd container configmap-volume-test: 
STEP: delete the pod
Aug 26 18:01:54.056: INFO: Waiting for pod pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd to disappear
Aug 26 18:01:54.071: INFO: Pod pod-configmaps-16a267d0-733d-4d49-80df-f06a3737f2cd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:01:54.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6303" for this suite.

• [SLOW TEST:6.384 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4524,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:01:54.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 26 18:01:54.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:11.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-662" for this suite.

• [SLOW TEST:17.080 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":267,"skipped":4533,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:11.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 18:02:11.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:15.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8435" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:15.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6d787564-6721-4570-9ba1-ef824ea441dd
STEP: Creating a pod to test consume configMaps
Aug 26 18:02:15.934: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f" in namespace "projected-5126" to be "Succeeded or Failed"
Aug 26 18:02:16.007: INFO: Pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f": Phase="Pending", Reason="", readiness=false. Elapsed: 73.200498ms
Aug 26 18:02:18.012: INFO: Pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077373225s
Aug 26 18:02:20.016: INFO: Pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081648564s
Aug 26 18:02:22.019: INFO: Pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084971333s
STEP: Saw pod success
Aug 26 18:02:22.019: INFO: Pod "pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f" satisfied condition "Succeeded or Failed"
Aug 26 18:02:22.021: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f container projected-configmap-volume-test: 
STEP: delete the pod
Aug 26 18:02:22.078: INFO: Waiting for pod pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f to disappear
Aug 26 18:02:22.096: INFO: Pod pod-projected-configmaps-7a9a0e33-1952-4f57-a202-2b4809bd148f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5126" for this suite.

• [SLOW TEST:6.643 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4595,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:22.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-jg4m
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 18:02:22.252: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jg4m" in namespace "subpath-7376" to be "Succeeded or Failed"
Aug 26 18:02:22.299: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.710028ms
Aug 26 18:02:24.348: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095685276s
Aug 26 18:02:26.351: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099163609s
Aug 26 18:02:28.356: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 6.10342309s
Aug 26 18:02:30.360: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 8.107487011s
Aug 26 18:02:32.364: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 10.111436881s
Aug 26 18:02:34.368: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 12.1155613s
Aug 26 18:02:36.372: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 14.119985229s
Aug 26 18:02:38.376: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 16.124008681s
Aug 26 18:02:40.381: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 18.128538869s
Aug 26 18:02:42.385: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 20.133166124s
Aug 26 18:02:44.389: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 22.136828771s
Aug 26 18:02:46.393: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Running", Reason="", readiness=true. Elapsed: 24.141173404s
Aug 26 18:02:48.398: INFO: Pod "pod-subpath-test-secret-jg4m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.14565613s
STEP: Saw pod success
Aug 26 18:02:48.398: INFO: Pod "pod-subpath-test-secret-jg4m" satisfied condition "Succeeded or Failed"
Aug 26 18:02:48.401: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-jg4m container test-container-subpath-secret-jg4m: 
STEP: delete the pod
Aug 26 18:02:48.447: INFO: Waiting for pod pod-subpath-test-secret-jg4m to disappear
Aug 26 18:02:48.462: INFO: Pod pod-subpath-test-secret-jg4m no longer exists
STEP: Deleting pod pod-subpath-test-secret-jg4m
Aug 26 18:02:48.462: INFO: Deleting pod "pod-subpath-test-secret-jg4m" in namespace "subpath-7376"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:48.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7376" for this suite.

• [SLOW TEST:26.343 seconds]
[sig-storage] Subpath
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":270,"skipped":4598,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:48.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 26 18:02:49.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 26 18:02:51.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061769, loc:(*time.Location)(0x7b565c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061769, loc:(*time.Location)(0x7b565c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734061769, loc:(*time.Location)(0x7b565c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 18:02:54.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 26 18:02:54.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3234-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:55.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6830" for this suite.
STEP: Destroying namespace "webhook-6830-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.328 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":271,"skipped":4598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:55.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-b03162a0-8e62-4b30-9894-d422601dfb21
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:02:55.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-388" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":272,"skipped":4622,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:02:55.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 26 18:02:55.975: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 26 18:02:55.992: INFO: Waiting for terminating namespaces to be deleted...
Aug 26 18:02:55.995: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 26 18:02:56.000: INFO: kindnet-f7bnz from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.000: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 18:02:56.000: INFO: kube-proxy-hhbw6 from kube-system started at 2020-08-23 15:13:27 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.000: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 18:02:56.000: INFO: daemon-set-rsfwc from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.000: INFO: 	Container app ready: true, restart count 0
Aug 26 18:02:56.000: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 26 18:02:56.007: INFO: pod-exec-websocket-8491eb7e-a1d1-4b5b-b44d-96511a84b21e from pods-8435 started at 2020-08-26 18:02:11 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.007: INFO: 	Container main ready: false, restart count 0
Aug 26 18:02:56.007: INFO: kindnet-4v6sn from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.007: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 26 18:02:56.007: INFO: kube-proxy-m77qg from kube-system started at 2020-08-23 15:13:26 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.007: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 26 18:02:56.007: INFO: daemon-set-69cql from daemonsets-7574 started at 2020-08-25 02:17:45 +0000 UTC (1 container status recorded)
Aug 26 18:02:56.007: INFO: 	Container app ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-292c0934-cd10-405d-bd8d-a1c1cec97c46 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-292c0934-cd10-405d-bd8d-a1c1cec97c46 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-292c0934-cd10-405d-bd8d-a1c1cec97c46
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:08:08.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7736" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:312.352 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":273,"skipped":4638,"failed":0}
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:08:08.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-1520b557-f2dd-4824-91fc-0b6c87877357
STEP: Creating a pod to test consume secrets
Aug 26 18:08:08.429: INFO: Waiting up to 5m0s for pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8" in namespace "secrets-2293" to be "Succeeded or Failed"
Aug 26 18:08:08.433: INFO: Pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302378ms
Aug 26 18:08:10.584: INFO: Pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154818016s
Aug 26 18:08:12.631: INFO: Pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201231582s
Aug 26 18:08:14.751: INFO: Pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321917046s
STEP: Saw pod success
Aug 26 18:08:14.751: INFO: Pod "pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8" satisfied condition "Succeeded or Failed"
Aug 26 18:08:14.754: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8 container secret-volume-test: 
STEP: delete the pod
Aug 26 18:08:15.246: INFO: Waiting for pod pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8 to disappear
Aug 26 18:08:15.464: INFO: Pod pod-secrets-5aae29ad-c697-48f6-b0ed-6a740a4dc9b8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:08:15.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2293" for this suite.

• [SLOW TEST:7.497 seconds]
[sig-storage] Secrets
/workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 26 18:08:15.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.18.8-rc.1-3+e2dc4848ea15e7/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 26 18:08:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4353" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 26 18:08:20.091: INFO: Running AfterSuite actions on all nodes
Aug 26 18:08:20.091: INFO: Running AfterSuite actions on node 1
Aug 26 18:08:20.091: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 6820.686 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS