Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621940448 - Will randomize all specs
Will run 5771 specs
Running in parallel across 10 nodes
May 25 11:00:50.402: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.406: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 11:00:50.430: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 11:00:50.476: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 11:00:50.476: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 11:00:50.476: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 11:00:50.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 25 11:00:50.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 11:00:50.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 25 11:00:50.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 11:00:50.485: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 25 11:00:50.485: INFO: e2e test version: v1.21.1
May 25 11:00:50.486: INFO: kube-apiserver version: v1.21.1
May 25 11:00:50.487: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.491: INFO: Cluster IP family: ipv4
May 25 11:00:50.500: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.523: INFO: Cluster IP family: ipv4
May 25 11:00:50.501: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.524: INFO: Cluster IP family: ipv4
May 25 11:00:50.508: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.530: INFO: Cluster IP family: ipv4
May 25 11:00:50.511: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.534: INFO: Cluster IP family: ipv4
May 25 11:00:50.519: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.538: INFO: Cluster IP family: ipv4
May 25 11:00:50.524: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.543: INFO: Cluster IP family: ipv4
May 25 11:00:50.574: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.594: INFO: Cluster IP family: ipv4
May 25 11:00:50.614: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.632: INFO: Cluster IP family: ipv4
------------------------------
May 25 11:00:50.648: INFO: >>> kubeConfig: /root/.kube/config
May 25 11:00:50.665: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0525 11:00:50.820217 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.820: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.823: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create a quota without scopes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
STEP: calling kubectl quota
May 25 11:00:50.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6211 create quota million --hard=pods=1000000,services=1000000'
May 25 11:00:50.935: INFO: stderr: ""
May 25 11:00:50.935: INFO: stdout: "resourcequota/million created\n"
STEP: verifying that the quota was created
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:00:50.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6211" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":1,"skipped":175,"failed":0}
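The imperative `create quota million --hard=pods=1000000,services=1000000` call above is shorthand for a ResourceQuota object; a sketch of the equivalent declarative manifest (namespace taken from this run):

```yaml
# Declarative equivalent of the "kubectl create quota" call in this spec.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: million
  namespace: kubectl-6211
spec:
  hard:
    pods: "1000000"       # quantities are strings in the API
    services: "1000000"
# No spec.scopes: an unscoped quota, matching "without scopes" in the spec name.
```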
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:51.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0525 11:00:51.078760 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:51.078: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:51.088: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should reject quota with invalid scopes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1814
STEP: calling kubectl quota
May 25 11:00:51.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-1396 create quota scopes --hard=pods=1000000 --scopes=Foo'
May 25 11:00:51.178: INFO: rc: 1
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:00:51.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1396" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":1,"skipped":371,"failed":0}
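The `rc: 1` above is the expected outcome: `Foo` is not a recognized quota scope, so the server rejects the object. In v1.21 the valid scope values include Terminating, NotTerminating, BestEffort, NotBestEffort, and PriorityClass. A sketch of a valid scoped quota (names are examples, not from this run):

```yaml
# A ResourceQuota with a valid scope; compare with the rejected --scopes=Foo.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-count
spec:
  hard:
    pods: "10"            # a BestEffort-scoped quota may only constrain pod count
  scopes:
  - BestEffort            # matches only pods with no resource requests/limits
```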
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
W0525 11:00:50.693658 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.693: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.696: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support forwarding over websockets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
May 25 11:00:50.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating the pod
May 25 11:00:50.707: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:52.711: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:54.711: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:56.882: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:58.712: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Sending the expected data to the local port
STEP: Reading data from the local port
STEP: Verifying logs
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:00:58.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-4860" for this suite.
• [SLOW TEST:8.096 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on localhost
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
should support forwarding over websockets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":1,"skipped":73,"failed":0}
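Unlike the later CLI-driven specs, this one never shells out to `kubectl port-forward`: it dials the pod's `portforward` subresource on the API server over a websocket. A sketch of the URL it targets, with the server address and namespace taken from this run and the query shape assumed from the stable v1 API:

```shell
# Build the portforward subresource URL a websocket client would dial.
server='https://172.30.13.90:33295'
ns='port-forwarding-4860'
pod='pfpod'
url="${server}/api/v1/namespaces/${ns}/pods/${pod}/portforward?ports=80"
echo "$url"
```

The actual dial also needs bearer-token or client-cert credentials from the kubeconfig, which this sketch omits.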
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:51.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
W0525 11:00:51.432255 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:51.432: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:51.436: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends NO DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
STEP: Creating the target pod
May 25 11:00:51.449: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:53.454: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:55.455: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:57.780: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:59.453: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Running 'kubectl port-forward'
May 25 11:00:59.454: INFO: starting port-forward command and streaming output
May 25 11:00:59.454: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=port-forwarding-9245 port-forward --namespace=port-forwarding-9245 pfpod :80'
May 25 11:00:59.454: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Closing the connection to the local port
STEP: Waiting for the target pod to stop running
May 25 11:00:59.743: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-9245" to be "container terminated"
May 25 11:00:59.747: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.990204ms
May 25 11:01:01.751: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.008385604s
May 25 11:01:01.751: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:01.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-9245" for this suite.
• [SLOW TEST:10.481 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on 0.0.0.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
that expects a client request
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
should support a client that connects, sends NO DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":520,"failed":0}
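The `pfpod :80` argument (empty local half) makes kubectl pick a free local port and announce it on stdout as `Forwarding from 127.0.0.1:PORT -> 80`; the "reading from `kubectl port-forward` command's stdout" line above is the harness parsing that port out before dialing it. A minimal sketch of that parsing, with an invented sample line standing in for live output:

```shell
# Sample of what `kubectl port-forward pfpod :80` prints (port number invented).
line='Forwarding from 127.0.0.1:43215 -> 80'
# Extract the ephemeral local port the way a harness would.
port=$(printf '%s\n' "$line" | sed -n 's/^Forwarding from 127\.0\.0\.1:\([0-9][0-9]*\) .*$/\1/p')
echo "$port"
```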
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0525 11:00:50.636494 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.636: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.639: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from
May 25 11:00:50.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4243 create -f -'
May 25 11:00:51.068: INFO: stderr: ""
May 25 11:00:51.069: INFO: stdout: "pod/httpd created\n"
May 25 11:00:51.069: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:00:51.069: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4243" to be "running and ready"
May 25 11:00:51.071: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519027ms
May 25 11:00:53.075: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006285536s
May 25 11:00:55.080: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.01111671s
May 25 11:00:57.187: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.118642781s
May 25 11:00:59.191: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.122421853s
May 25 11:01:01.196: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.127322896s
May 25 11:01:01.196: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:01.196: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec using resource/name
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
STEP: executing a command in the container
May 25 11:01:01.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4243 exec pod/httpd echo running in container'
May 25 11:01:01.419: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:01.475: INFO: stdout: "running in container\n"
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:01.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4243 delete --grace-period=0 --force -f -'
May 25 11:01:01.701: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:01.701: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:01.701: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4243 get rc,svc -l name=httpd --no-headers'
May 25 11:01:01.985: INFO: stderr: "No resources found in kubectl-4243 namespace.\n"
May 25 11:01:01.985: INFO: stdout: ""
May 25 11:01:01.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-4243 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:02.296: INFO: stderr: ""
May 25 11:01:02.296: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:02.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4243" for this suite.
• [SLOW TEST:11.689 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should support exec using resource/name
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":1,"skipped":46,"failed":0}
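The stderr warning above notes that `kubectl exec [POD] [COMMAND]` is deprecated in favor of the `--` form, where everything after `--` is passed to the container verbatim. A sketch of the corrected invocation (shown as a comment, since it needs this run's cluster), with the container-side command run locally to show the expected stdout:

```shell
# Non-deprecated form of the exec call above (cluster-dependent, so commented):
#   kubectl --namespace=kubectl-4243 exec pod/httpd -- echo running in container
# The part after "--" is what the container executes; run it locally:
echo running in container
```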
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0525 11:00:50.949337 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.949: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.952: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from
May 25 11:00:50.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3654 create -f -'
May 25 11:00:51.234: INFO: stderr: ""
May 25 11:00:51.234: INFO: stdout: "pod/httpd created\n"
May 25 11:00:51.234: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:00:51.234: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3654" to be "running and ready"
May 25 11:00:51.240: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.511559ms
May 25 11:00:53.243: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.008791004s
May 25 11:00:55.247: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.012918461s
May 25 11:00:57.251: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.017436726s
May 25 11:00:59.255: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.020796294s
May 25 11:01:01.259: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.025007932s
May 25 11:01:01.259: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:01.259: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support port-forward
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
STEP: forwarding the container port to a local port
May 25 11:01:01.259: INFO: starting port-forward command and streaming output
May 25 11:01:01.259: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3654 port-forward --namespace=kubectl-3654 httpd :80'
May 25 11:01:01.260: INFO: reading from `kubectl port-forward` command's stdout
STEP: curling local port output
May 25 11:01:01.728: INFO: got:
It works!
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:01.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3654 delete --grace-period=0 --force -f -'
May 25 11:01:02.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:02.291: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:02.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3654 get rc,svc -l name=httpd --no-headers'
May 25 11:01:02.409: INFO: stderr: "No resources found in kubectl-3654 namespace.\n"
May 25 11:01:02.409: INFO: stdout: ""
May 25 11:01:02.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3654 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:02.529: INFO: stderr: ""
May 25 11:01:02.529: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:02.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3654" for this suite.
• [SLOW TEST:11.611 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should support port-forward
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":1,"skipped":266,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0525 11:00:50.904211 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.904: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.907: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from
May 25 11:00:50.911: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 create -f -'
May 25 11:00:51.214: INFO: stderr: ""
May 25 11:00:51.214: INFO: stdout: "pod/httpd created\n"
May 25 11:00:51.214: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:00:51.214: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2999" to be "running and ready"
May 25 11:00:51.217: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.394389ms
May 25 11:00:53.222: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007470101s
May 25 11:00:55.226: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.012164594s
May 25 11:00:57.231: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.016436655s
May 25 11:00:59.235: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.021013489s
May 25 11:01:01.240: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.025740866s
May 25 11:01:01.240: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:01.240: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec through an HTTP proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
STEP: Starting goproxy
STEP: Running kubectl via an HTTP proxy using https_proxy
May 25 11:01:01.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 --namespace=kubectl-2999 exec httpd echo running in container'
May 25 11:01:01.572: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:01.572: INFO: stdout: "running in container\n"
STEP: Running kubectl via an HTTP proxy using HTTPS_PROXY
May 25 11:01:01.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 --namespace=kubectl-2999 exec httpd echo running in container'
May 25 11:01:01.814: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:01.814: INFO: stdout: "running in container\n"
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:01.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 delete --grace-period=0 --force -f -'
May 25 11:01:02.291: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:02.292: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:02.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 get rc,svc -l name=httpd --no-headers'
May 25 11:01:02.416: INFO: stderr: "No resources found in kubectl-2999 namespace.\n"
May 25 11:01:02.416: INFO: stdout: ""
May 25 11:01:02.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-2999 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:02.534: INFO: stderr: ""
May 25 11:01:02.534: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:02.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2999" for this suite.
• [SLOW TEST:11.677 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should support exec through an HTTP proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":194,"failed":0}
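The two STEPs above differ only in the case of the proxy variable: kubectl, via Go's net/http proxy handling, honors both `https_proxy` and `HTTPS_PROXY`. The per-invocation environment pattern the test relies on, demonstrated locally (the proxy address is an invented example, not the goproxy address from this run):

```shell
# A VAR=value prefix scopes the variable to a single command, which is how the
# test flips between lower- and upper-case proxy settings without exporting.
https_proxy=http://127.0.0.1:8080 sh -c 'echo "proxy=$https_proxy"'
HTTPS_PROXY=http://127.0.0.1:8080 sh -c 'echo "proxy=$HTTPS_PROXY"'
```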
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
W0525 11:00:50.755046 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.755: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.759: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
STEP: Creating the target pod
May 25 11:00:50.769: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:52.774: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:54.774: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:56.882: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:58.774: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Running 'kubectl port-forward'
May 25 11:00:58.774: INFO: starting port-forward command and streaming output
May 25 11:00:58.774: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=port-forwarding-3423 port-forward --namespace=port-forwarding-3423 pfpod :80'
May 25 11:00:58.775: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Reading data from the local port
STEP: Waiting for the target pod to stop running
May 25 11:01:00.868: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-3423" to be "container terminated"
May 25 11:01:00.878: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 10.116346ms
May 25 11:01:02.886: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.017360243s
May 25 11:01:02.886: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:02.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-3423" for this suite.
• [SLOW TEST:12.178 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on 0.0.0.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
that expects NO client request
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:50.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
W0525 11:00:50.882980 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 11:00:50.883: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 11:00:50.886: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
STEP: Creating the target pod
May 25 11:00:50.902: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:52.906: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:00:54.907: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:57.188: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:00:58.906: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Running 'kubectl port-forward'
May 25 11:00:58.906: INFO: starting port-forward command and streaming output
May 25 11:00:58.906: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=port-forwarding-9022 port-forward --namespace=port-forwarding-9022 pfpod :80'
May 25 11:00:58.907: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Sending the expected data to the local port
STEP: Reading data from the local port
STEP: Closing the write half of the client's connection
STEP: Waiting for the target pod to stop running
May 25 11:01:01.014: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-9022" to be "container terminated"
May 25 11:01:01.018: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.722832ms
May 25 11:01:03.024: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.010478072s
May 25 11:01:03.024: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:03.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-9022" for this suite.
• [SLOW TEST:12.204 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on localhost
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
that expects a client request
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":150,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:02.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should apply a new configuration to an existing RC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:794
STEP: creating Agnhost RC
May 25 11:01:02.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-1041 create -f -'
May 25 11:01:03.133: INFO: stderr: ""
May 25 11:01:03.133: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: applying a modified configuration
May 25 11:01:03.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-1041 apply -f -'
May 25 11:01:03.440: INFO: stderr: "Warning: resource replicationcontrollers/agnhost-primary is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\n"
May 25 11:01:03.440: INFO: stdout: "replicationcontroller/agnhost-primary configured\n"
STEP: checking the result
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:03.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1041" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":2,"skipped":410,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:03.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create a quota with scopes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1787
STEP: calling kubectl quota
May 25 11:01:03.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6527 create quota scopes --hard=pods=1000000 --scopes=BestEffort,NotTerminating'
May 25 11:01:03.775: INFO: stderr: ""
May 25 11:01:03.775: INFO: stdout: "resourcequota/scopes created\n"
STEP: verifying that the quota was created
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:03.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6527" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":2,"skipped":930,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:51.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create/apply a CR with unknown fields for CRD with no validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
STEP: create CRD with no validation schema
May 25 11:00:51.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
May 25 11:01:02.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7882 create --validate=true -f -'
May 25 11:01:02.618: INFO: stderr: ""
May 25 11:01:02.618: INFO: stdout: "e2e-test-kubectl-7880-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:02.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7882 delete e2e-test-kubectl-7880-crds test-cr'
May 25 11:01:02.792: INFO: stderr: ""
May 25 11:01:02.792: INFO: stdout: "e2e-test-kubectl-7880-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
May 25 11:01:02.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7882 apply --validate=true -f -'
May 25 11:01:03.132: INFO: stderr: ""
May 25 11:01:03.132: INFO: stdout: "e2e-test-kubectl-7880-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:03.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7882 delete e2e-test-kubectl-7880-crds test-cr'
May 25 11:01:03.274: INFO: stderr: ""
May 25 11:01:03.274: INFO: stdout: "e2e-test-kubectl-7880-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:03.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7882" for this suite.
• [SLOW TEST:12.158 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl client-side validation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
should create/apply a CR with unknown fields for CRD with no validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":2,"skipped":662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:02.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] apply set/view last-applied
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:828
STEP: deployment replicas number is 2
May 25 11:01:02.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 apply -f -'
May 25 11:01:02.988: INFO: stderr: ""
May 25 11:01:02.988: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: check the last-applied matches expectations annotations
May 25 11:01:02.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 apply view-last-applied -f - -o json'
May 25 11:01:03.099: INFO: stderr: ""
May 25 11:01:03.099: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"annotations\": {},\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-501\"\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n"
STEP: apply file doesn't have replicas
May 25 11:01:03.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 apply set-last-applied -f -'
May 25 11:01:03.235: INFO: stderr: ""
May 25 11:01:03.235: INFO: stdout: "deployment.apps/httpd-deployment configured\n"
STEP: check last-applied has been updated, annotations doesn't have replicas
May 25 11:01:03.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 apply view-last-applied -f - -o json'
May 25 11:01:03.375: INFO: stderr: ""
May 25 11:01:03.375: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-501\"\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n"
STEP: scale set replicas to 3
May 25 11:01:03.378: INFO: scanned /root for discovery docs:
May 25 11:01:03.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 scale deployment httpd-deployment --replicas=3'
May 25 11:01:03.500: INFO: stderr: ""
May 25 11:01:03.500: INFO: stdout: "deployment.apps/httpd-deployment scaled\n"
STEP: apply file doesn't have replicas but image changed
May 25 11:01:03.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 apply -f -'
May 25 11:01:03.847: INFO: stderr: ""
May 25 11:01:03.847: INFO: stdout: "deployment.apps/httpd-deployment configured\n"
STEP: verify replicas still is 3 and image has been updated
May 25 11:01:03.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-501 get -f - -o json'
May 25 11:01:03.993: INFO: stderr: ""
May 25 11:01:03.993: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"items\": [\n        {\n            \"apiVersion\": \"apps/v1\",\n            \"kind\": \"Deployment\",\n            \"metadata\": {\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"2\",\n                    \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"httpd-deployment\\\",\\\"namespace\\\":\\\"kubectl-501\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"image\\\":\\\"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\\\",\\\"name\\\":\\\"httpd\\\",\\\"ports\\\":[{\\\"containerPort\\\":80}]}]}}}}\\n\"\n                },\n                \"creationTimestamp\": \"2021-05-25T11:01:02Z\",\n                \"generation\": 4,\n                \"name\": \"httpd-deployment\",\n                \"namespace\": \"kubectl-501\",\n                \"resourceVersion\": \"527287\",\n                \"uid\": \"ea213720-e14a-44de-bfa7-be21dda2b8ad\"\n            },\n            \"spec\": {\n                \"progressDeadlineSeconds\": 600,\n                \"replicas\": 3,\n                \"revisionHistoryLimit\": 10,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"app\": \"httpd\"\n                    }\n                },\n                \"strategy\": {\n                    \"rollingUpdate\": {\n                        \"maxSurge\": \"25%\",\n                        \"maxUnavailable\": \"25%\"\n                    },\n                    \"type\": \"RollingUpdate\"\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"httpd\"\n                        }\n                    },\n                    \"spec\": {\n                        \"containers\": [\n                            {\n                                \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"name\": \"httpd\",\n                                \"ports\": [\n                                    {\n                                        \"containerPort\": 80,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {},\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\"\n                            }\n                        ],\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"restartPolicy\": \"Always\",\n                        \"schedulerName\": \"default-scheduler\",\n                        \"securityContext\": {},\n                        \"terminationGracePeriodSeconds\": 30\n                    }\n                }\n            },\n            \"status\": {\n                \"conditions\": [\n                    {\n                        \"lastTransitionTime\": \"2021-05-25T11:01:03Z\",\n                        \"lastUpdateTime\": \"2021-05-25T11:01:03Z\",\n                        \"message\": \"Deployment does not have minimum availability.\",\n                        \"reason\": \"MinimumReplicasUnavailable\",\n                        \"status\": \"False\",\n                        \"type\": \"Available\"\n                    },\n                    {\n                        \"lastTransitionTime\": \"2021-05-25T11:01:02Z\",\n                        \"lastUpdateTime\": \"2021-05-25T11:01:03Z\",\n                        \"message\": \"ReplicaSet \\\"httpd-deployment-8584777d8\\\" is progressing.\",\n                        \"reason\": \"ReplicaSetUpdated\",\n                        \"status\": \"True\",\n                        \"type\": \"Progressing\"\n                    }\n                ],\n                \"observedGeneration\": 4,\n                \"replicas\": 4,\n                \"unavailableReplicas\": 4,\n                \"updatedReplicas\": 1\n            }\n        }\n    ],\n    \"kind\": \"List\",\n    \"metadata\": {\n        \"resourceVersion\": \"\",\n        \"selfLink\": \"\"\n    }\n}\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:03.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-501" for this suite.
•SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":2,"skipped":249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:01.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Kubectl copy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1347
STEP: creating the pod
May 25 11:01:02.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 create -f -'
May 25 11:01:02.583: INFO: stderr: ""
May 25 11:01:02.583: INFO: stdout: "pod/busybox1 created\n"
May 25 11:01:02.583: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [busybox1]
May 25 11:01:02.583: INFO: Waiting up to 5m0s for pod "busybox1" in namespace "kubectl-9887" to be "running and ready"
May 25 11:01:02.587: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.894047ms
May 25 11:01:04.593: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009548444s
May 25 11:01:06.787: INFO: Pod "busybox1": Phase="Running", Reason="", readiness=true. Elapsed: 4.20436481s
May 25 11:01:06.787: INFO: Pod "busybox1" satisfied condition "running and ready"
May 25 11:01:06.787: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [busybox1]
[It] should copy a file from a running Pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: specifying a remote filepath busybox1:/root/foo/bar/foo.bar on the pod
May 25 11:01:06.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 cp busybox1:/root/foo/bar/foo.bar /tmp/copy-foobar241450091'
May 25 11:01:07.380: INFO: stderr: ""
May 25 11:01:07.380: INFO: stdout: "tar: removing leading '/' from member names\n"
STEP: verifying that the contents of the remote file busybox1:/root/foo/bar/foo.bar have been copied to a local file /tmp/copy-foobar241450091
[AfterEach] Kubectl copy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353
STEP: using delete to clean up resources
May 25 11:01:07.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 delete --grace-period=0 --force -f -'
May 25 11:01:07.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:07.532: INFO: stdout: "pod \"busybox1\" force deleted\n"
May 25 11:01:07.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 get rc,svc -l app=busybox1 --no-headers'
May 25 11:01:07.787: INFO: stderr: "No resources found in kubectl-9887 namespace.\n"
May 25 11:01:07.787: INFO: stdout: ""
May 25 11:01:07.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 get pods -l app=busybox1 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:07.957: INFO: stderr: ""
May 25 11:01:07.957: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:07.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9887" for this suite.
• [SLOW TEST:6.008 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl copy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1345
should copy a file from a running Pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":2,"skipped":562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:00:51.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from
May 25 11:00:51.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 create -f -'
May 25 11:00:51.363: INFO: stderr: ""
May 25 11:00:51.363: INFO: stdout: "pod/httpd created\n"
May 25 11:00:51.363: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:00:51.363: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7950" to be "running and ready"
May 25 11:00:51.366: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.228101ms
May 25 11:00:53.371: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007961091s
May 25 11:00:55.376: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.01264694s
May 25 11:00:57.780: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.416851951s
May 25 11:00:59.785: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.421925594s
May 25 11:01:01.880: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.517585543s
May 25 11:01:01.881: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:01.881: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should handle in-cluster config
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
STEP: adding rbac permissions
May 25 11:01:02.084: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: overriding icc with values provided by flags
May 25 11:01:02.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_HOST'
May 25 11:01:02.596: INFO: stderr: "+ printenv KUBERNETES_SERVICE_HOST\n"
May 25 11:01:02.596: INFO: stdout: "10.96.0.1\n"
May 25 11:01:02.596: INFO: stdout: 10.96.0.1
May 25 11:01:02.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_PORT'
May 25 11:01:02.913: INFO: stderr: "+ printenv KUBERNETES_SERVICE_PORT\n"
May 25 11:01:02.913: INFO: stdout: "443\n"
May 25 11:01:02.913: INFO: stdout: 443
May 25 11:01:02.913: INFO: copying /usr/local/bin/kubectl to the httpd pod
May 25 11:01:02.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 cp /usr/local/bin/kubectl kubectl-7950/httpd:/tmp/'
May 25 11:01:03.351: INFO: stderr: ""
May 25 11:01:03.351: INFO: stdout: ""
May 25 11:01:03.352: INFO: copying override kubeconfig to the httpd pod
May 25 11:01:03.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 cp /tmp/icc-override987134191/icc-override.kubeconfig kubectl-7950/httpd:/tmp/'
May 25 11:01:03.673: INFO: stderr: ""
May 25 11:01:03.673: INFO: stdout: ""
May 25 11:01:03.674: INFO: copying configmap manifests to the httpd pod
May 25 11:01:03.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 cp /tmp/icc-override987134191/invalid-configmap-with-namespace.yaml kubectl-7950/httpd:/tmp/'
May 25 11:01:03.989: INFO: stderr: ""
May 25 11:01:03.989: INFO: stdout: ""
May 25 11:01:03.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 cp /tmp/icc-override987134191/invalid-configmap-without-namespace.yaml kubectl-7950/httpd:/tmp/'
May 25 11:01:04.327: INFO: stderr: ""
May 25 11:01:04.328: INFO: stdout: ""
STEP: getting pods with in-cluster configs
May 25 11:01:04.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --v=6 2>&1'
May 25 11:01:04.725: INFO: stderr: "+ /tmp/kubectl get pods '--v=6'\n"
May 25 11:01:04.726: INFO: stdout: I0525 11:01:04.617562 152 merged_client_builder.go:163] Using in-cluster namespace
I0525 11:01:04.617814 152 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:04.628326 152 round_trippers.go:454] GET https://10.96.0.1:443/api?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.633092 152 round_trippers.go:454] GET https://10.96.0.1:443/apis?timeout=32s 200 OK in 1 milliseconds
I0525 11:01:04.638092 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 1 milliseconds
I0525 11:01:04.638559 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/apps/v1?timeout=32s 200 OK in 1 milliseconds
I0525 11:01:04.638605 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds
I0525 11:01:04.639466 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0525 11:01:04.640211 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:04.640677 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:04.641643 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:04.641665 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:04.642290 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:04.642481 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:04.642752 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.642799 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:04.642814 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.642963 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643061 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643070 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643361 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643643 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643720 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.643917 152 round_trippers.go:454] GET https://10.96.0.1:443/api/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:04.644571 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:04.644795 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:04.644910 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:04.645013 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/batch/v1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:04.645223 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:04.645255 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:04.645504 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:04.645910 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:04.646098 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:04.646360 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.646817 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/extensions/v1beta1?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.646850 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.647005 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.647007 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:04.647481 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/kubectl.example.com/v1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.647498 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.647661 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.647670 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.647801 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.648045 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.648250 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/node.k8s.io/v1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.648563 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/policy/v1?timeout=32s 200 OK in 11 milliseconds
I0525 11:01:04.648594 152 round_trippers.go:454] GET https://10.96.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds
I0525 11:01:04.700361 152 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:04.707547 152 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:04.711621 152 round_trippers.go:454] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-7950/pods?limit=500 200 OK in 3 milliseconds
NAME READY STATUS RESTARTS AGE
httpd 1/1 Running 0 13s
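The repeated `Using in-cluster configuration` lines above come from kubectl detecting that it is running inside a pod. Roughly, that check requires both the service env vars and the mounted service account credentials under the well-known path `/var/run/secrets/kubernetes.io/serviceaccount`. A hedged sketch of that detection logic (not kubectl's actual implementation):

```python
import os

# Standard in-cluster service account mount (well-known path).
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def can_use_in_cluster_config(env=os.environ, sa_dir=SA_DIR):
    """Approximates the check behind 'Using in-cluster configuration':
    the service env vars must be set and the mounted token and CA
    bundle must both exist."""
    if not (env.get("KUBERNETES_SERVICE_HOST")
            and env.get("KUBERNETES_SERVICE_PORT")):
        return False
    return (os.path.isfile(os.path.join(sa_dir, "token"))
            and os.path.isfile(os.path.join(sa_dir, "ca.crt")))
```

When this returns true, the client authenticates with the mounted token against the address built from the env vars, which is why `/tmp/kubectl get pods` works with no kubeconfig at all.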
STEP: creating an object containing a namespace with in-cluster config
May 25 11:01:04.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-with-namespace.yaml --v=6 2>&1'
May 25 11:01:05.319: INFO: rc: 255
STEP: creating an object not containing a namespace with in-cluster config
May 25 11:01:05.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
May 25 11:01:05.845: INFO: rc: 255
STEP: trying to use kubectl with invalid token
May 25 11:01:05.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
May 25 11:01:06.529: INFO: rc: 255
May 25 11:01:06.529: INFO: got error running /usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0525 11:01:06.458220 251 merged_client_builder.go:163] Using in-cluster namespace
I0525 11:01:06.458522 251 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:06.461673 251 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:06.469081 251 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:06.469536 251 round_trippers.go:432] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-7950/pods?limit=500
I0525 11:01:06.469558 251 round_trippers.go:438] Request Headers:
I0525 11:01:06.469571 251 round_trippers.go:442] User-Agent: kubectl/v1.21.1 (linux/amd64) kubernetes/5e58841
I0525 11:01:06.469597 251 round_trippers.go:442] Authorization: Bearer
I0525 11:01:06.469608 251 round_trippers.go:442] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0525 11:01:06.479073 251 round_trippers.go:457] Response Status: 401 Unauthorized in 9 milliseconds
I0525 11:01:06.479567 251 helpers.go:216] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}]
F0525 11:01:06.479606 251 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc00046e000, 0x68, 0x1af)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x316f740, 0xc000000003, 0x0, 0x0, 0xc000b4ab60, 0x26cc9dc, 0xa, 0x73, 0x40e300)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x316f740, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0005b6780, 0x1, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000bb5080, 0x3a, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x21357e0, 0xc000a0a9a8, 0x1fb10a8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000aa2dc0, 0xc00049b920, 0x1, 0x3)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000aa2dc0, 0xc00049b8f0, 0x3, 0x3, 0xc000aa2dc0, 0xc00049b8f0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00032cb00, 0xc000158120, 0xc00003a0a0, 0x5)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d
goroutine 5 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x316f740)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:418 +0xdf
goroutine 136 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1fb0fc8, 0x2133000, 0xc000b20030, 0x1, 0xc000048ba0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1fb0fc8, 0x12a05f200, 0x0, 0x1, 0xc000048ba0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1fb0fc8, 0x12a05f200, 0xc000048ba0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96
goroutine 291 [IO wait]:
internal/poll.runtime_pollWait(0x7fe388915248, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000b7ce18, 0x72, 0x800, 0x8fb, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000b7ce00, 0xc000cf2000, 0x8fb, 0x8fb, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc000b7ce00, 0xc000cf2000, 0x8fb, 0x8fb, 0x8f6, 0xc000cf2000, 0x5)
/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000c9c008, 0xc000cf2000, 0x8fb, 0x8fb, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:183 +0x91
crypto/tls.(*atLeastReader).Read(0xc000a0a648, 0xc000cf2000, 0x8fb, 0x8fb, 0x8f6, 0xc000138800, 0x0)
/usr/local/go/src/crypto/tls/conn.go:776 +0x63
bytes.(*Buffer).ReadFrom(0xc000cba278, 0x2131960, 0xc000a0a648, 0x40b985, 0x1c6dc00, 0x1e31040)
/usr/local/go/src/bytes/buffer.go:204 +0xbe
crypto/tls.(*Conn).readFromUntil(0xc000cba000, 0x2134700, 0xc000c9c008, 0x5, 0xc000c9c008, 0x8a)
/usr/local/go/src/crypto/tls/conn.go:798 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000cba000, 0x0, 0x0, 0xd)
/usr/local/go/src/crypto/tls/conn.go:605 +0x115
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:573
crypto/tls.(*Conn).Read(0xc000cba000, 0xc00059e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1276 +0x165
bufio.(*Reader).Read(0xc0005649c0, 0xc0002561f8, 0x9, 0x9, 0x9a19cb, 0xc0011b3c78, 0x407005)
/usr/local/go/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x2131780, 0xc0005649c0, 0xc0002561f8, 0x9, 0x9, 0x9, 0xc0005b62c0, 0x55d25b0d0b1500, 0xc0005b62c0)
/usr/local/go/src/io/io.go:328 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:347
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0002561f8, 0x9, 0x9, 0x2131780, 0xc0005649c0, 0x0, 0x0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002561c0, 0xc000cac960, 0x0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0011b3fa8, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000cb8180)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5
stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255
error:
exit status 255
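The 401 case above shows the apiserver returning a structured `v1.Status` object (`reason: "Unauthorized"`, `code: 401`) rather than a bare error string, and kubectl translating it into "You must be logged in to the server". A minimal sketch of classifying such a response body (illustrative; real clients decode this into a typed Status struct):

```python
import json

def is_unauthorized(body: str) -> bool:
    """Inspect an apiserver error body like the one in the log above:
    a v1 Status object with reason 'Unauthorized' and code 401."""
    try:
        status = json.loads(body)
    except ValueError:
        return False
    return (status.get("kind") == "Status"
            and (status.get("code") == 401
                 or status.get("reason") == "Unauthorized"))

body = """{"kind": "Status", "apiVersion": "v1", "metadata": {},
           "status": "Failure", "message": "Unauthorized",
           "reason": "Unauthorized", "code": 401}"""
print(is_unauthorized(body))  # -> True
```

Because the error is structured, a client can distinguish an auth failure (fix the token) from, say, a 403 (fix RBAC) or a transport error.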
STEP: trying to use kubectl with invalid server
May 25 11:01:06.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
May 25 11:01:07.105: INFO: rc: 255
May 25 11:01:07.105: INFO: got error running /usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0525 11:01:07.043276 281 merged_client_builder.go:163] Using in-cluster namespace
I0525 11:01:07.080869 281 round_trippers.go:454] GET http://invalid/api?timeout=32s in 36 milliseconds
I0525 11:01:07.080991 281 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.084556 281 round_trippers.go:454] GET http://invalid/api?timeout=32s in 3 milliseconds
I0525 11:01:07.084635 281 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.084668 281 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.087249 281 round_trippers.go:454] GET http://invalid/api?timeout=32s in 2 milliseconds
I0525 11:01:07.087325 281 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.089693 281 round_trippers.go:454] GET http://invalid/api?timeout=32s in 2 milliseconds
I0525 11:01:07.089757 281 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.092051 281 round_trippers.go:454] GET http://invalid/api?timeout=32s in 2 milliseconds
I0525 11:01:07.092121 281 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
I0525 11:01:07.092258 281 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
F0525 11:01:07.092291 281 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0001ba001, 0xc000324000, 0x8d, 0x1bd)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x316f740, 0xc000000003, 0x0, 0x0, 0xc000852d20, 0x26cc9dc, 0xa, 0x73, 0x40e300)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x316f740, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0003dc560, 0x1, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00081cde0, 0x5e, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2134b00, 0xc0005c5c20, 0x1fb10a8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004ac2c0, 0xc0002a7680, 0x1, 0x3)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0004ac2c0, 0xc0002a7650, 0x3, 0x3, 0xc0004ac2c0, 0xc0002a7650)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005b6840, 0xc0001bc120, 0xc0001c0000, 0x5)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d
goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x316f740)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1164 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:418 +0xdf
goroutine 96 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1fb0fc8, 0x2133000, 0xc000748000, 0x1, 0xc000048060)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1fb0fc8, 0x12a05f200, 0x0, 0x1, 0xc000048060)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1fb0fc8, 0x12a05f200, 0xc000048060)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96
stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255
error:
exit status 255
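Note how the invalid-server case above repeats the failed discovery `GET http://invalid/api` several times before kubectl gives up with "Unable to connect to the server". A loose sketch of that bounded-retry pattern (hypothetical helper, not kubectl's actual retry code, which also filters for retryable errors):

```python
def retry(fn, attempts=5):
    """Bounded retry, loosely like the repeated discovery GETs in the
    log above: try up to `attempts` times, re-raising the last error."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:  # real clients retry only retryable errors
            last = e
    raise last

calls = {"n": 0}
def always_fails():
    calls["n"] += 1
    raise ConnectionError("dial tcp: lookup invalid: server misbehaving")

try:
    retry(always_fails, attempts=5)
except ConnectionError:
    pass
print(calls["n"])  # -> 5
```

Since `invalid` never resolves, every attempt fails the same way and the final DNS error is what surfaces to the user.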
STEP: trying to use kubectl with invalid namespace
May 25 11:01:07.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
May 25 11:01:07.575: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
May 25 11:01:07.575: INFO: stdout: I0525 11:01:07.540428 306 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:07.545239 306 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:07.549442 306 merged_client_builder.go:121] Using in-cluster configuration
I0525 11:01:07.561808 306 round_trippers.go:454] GET https://10.96.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 12 milliseconds
No resources found in invalid namespace.
STEP: trying to use kubectl with kubeconfig
May 25 11:01:07.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1'
May 25 11:01:08.073: INFO: stderr: "+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\n"
May 25 11:01:08.073: INFO: stdout: "I0525 11:01:07.958536 336 loader.go:372] Config loaded from file: /tmp/icc-override.kubeconfig\nI0525 11:01:07.968877 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 9 milliseconds\nI0525 11:01:07.972406 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 0 milliseconds\nI0525 11:01:07.979012 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 3 milliseconds\nI0525 11:01:07.979270 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 3 milliseconds\nI0525 11:01:07.979274 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds\nI0525 11:01:07.979291 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 3 milliseconds\nI0525 11:01:07.979787 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 3 milliseconds\nI0525 11:01:07.980192 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds\nI0525 11:01:07.980240 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds\nI0525 11:01:07.980920 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0525 11:01:07.981077 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/kubectl.example.com/v1?timeout=32s 200 OK in 4 milliseconds\nI0525 11:01:07.981437 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.981630 336 round_trippers.go:454] GET 
https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.981645 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.981845 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.981870 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.982015 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.982211 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.982225 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.982752 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.983074 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.983131 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.983137 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/policy/v1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.983169 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.983622 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0525 11:01:07.983635 336 round_trippers.go:454] 
GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.984186 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.984437 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.984498 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.984793 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.984811 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.984797 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0525 11:01:07.985027 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.985032 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.985386 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.985755 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.985780 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.985755 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.986151 336 round_trippers.go:454] GET 
https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.986163 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0525 11:01:07.986187 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.986367 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.986643 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.986985 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:07.986993 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0525 11:01:08.061793 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 2 milliseconds\nNo resources found in default namespace.\n"
May 25 11:01:08.073: INFO: stdout: I0525 11:01:07.958536 336 loader.go:372] Config loaded from file: /tmp/icc-override.kubeconfig
I0525 11:01:07.968877 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 9 milliseconds
I0525 11:01:07.972406 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 0 milliseconds
I0525 11:01:07.979012 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:07.979270 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:07.979274 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:07.979291 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:07.979787 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 3 milliseconds
I0525 11:01:07.980192 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds
I0525 11:01:07.980240 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:07.980920 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0525 11:01:07.981077 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/kubectl.example.com/v1?timeout=32s 200 OK in 4 milliseconds
I0525 11:01:07.981437 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.981630 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.981645 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.981845 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.981870 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.982015 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.982211 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.982225 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.982752 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.983074 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.983131 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.983137 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/policy/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.983169 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.983622 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds
I0525 11:01:07.983635 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.984186 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.984437 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.984498 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.984793 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.984811 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.984797 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds
I0525 11:01:07.985027 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.985032 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.985386 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.985755 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.985780 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.985755 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.986151 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.986163 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds
I0525 11:01:07.986187 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.986367 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.986643 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.986985 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:07.986993 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds
I0525 11:01:08.061793 336 round_trippers.go:454] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 2 milliseconds
No resources found in default namespace.
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:08.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 delete --grace-period=0 --force -f -'
May 25 11:01:08.188: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:08.188: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:08.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 get rc,svc -l name=httpd --no-headers'
May 25 11:01:08.320: INFO: stderr: "No resources found in kubectl-7950 namespace.\n"
May 25 11:01:08.320: INFO: stdout: ""
May 25 11:01:08.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7950 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:08.434: INFO: stderr: ""
May 25 11:01:08.434: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:08.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7950" for this suite.
• [SLOW TEST:17.404 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should handle in-cluster config
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:02.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support forwarding over websockets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
May 25 11:01:02.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating the pod
May 25 11:01:02.972: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:04.976: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:07.083: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:08.976: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:10.976: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:01:12.978: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:01:14.977: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Sending the expected data to the local port
STEP: Reading data from the local port
STEP: Verifying logs
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:15.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-9154" for this suite.
• [SLOW TEST:12.107 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on 0.0.0.0
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
should support forwarding over websockets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":2,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:03.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
STEP: prepare CRD with partially-specified validation schema
May 25 11:01:03.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
May 25 11:01:14.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8601 create --validate=true -f -'
May 25 11:01:14.726: INFO: stderr: ""
May 25 11:01:14.726: INFO: stdout: "e2e-test-kubectl-9591-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:14.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8601 delete e2e-test-kubectl-9591-crds test-cr'
May 25 11:01:14.853: INFO: stderr: ""
May 25 11:01:14.853: INFO: stdout: "e2e-test-kubectl-9591-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
May 25 11:01:14.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8601 apply --validate=true -f -'
May 25 11:01:15.150: INFO: stderr: ""
May 25 11:01:15.150: INFO: stdout: "e2e-test-kubectl-9591-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:15.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-8601 delete e2e-test-kubectl-9591-crds test-cr'
May 25 11:01:15.309: INFO: stderr: ""
May 25 11:01:15.309: INFO: stdout: "e2e-test-kubectl-9591-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:15.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8601" for this suite.
• [SLOW TEST:12.305 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl client-side validation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":3,"skipped":462,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:15.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if kubectl describe prints relevant information for cronjob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1187
STEP: creating a cronjob
May 25 11:01:15.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-66 create -f -'
May 25 11:01:15.755: INFO: stderr: "Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob\n"
May 25 11:01:15.755: INFO: stdout: "cronjob.batch/cronjob-test created\n"
STEP: waiting for cronjob to start.
W0525 11:01:15.758964 22 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
STEP: verifying kubectl describe prints
May 25 11:01:15.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-66 describe cronjob cronjob-test'
May 25 11:01:15.907: INFO: stderr: ""
May 25 11:01:15.907: INFO: stdout: "Name: cronjob-test\nNamespace: kubectl-66\nLabels: <none>\nAnnotations: <none>\nSchedule: */1 * * * *\nConcurrency Policy: Allow\nSuspend: False\nSuccessful Job History Limit: 3\nFailed Job History Limit: 1\nStarting Deadline Seconds: 30s\nSelector: <unset>\nParallelism: <unset>\nCompletions: <unset>\nPod Template:\n Labels: <none>\n Containers:\n test:\n Image: k8s.gcr.io/e2e-test-images/busybox:1.29-1\n Port: <none>\n Host Port: <none>\n Args:\n /bin/true\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nLast Schedule Time: <unset>\nActive Jobs: <none>\nEvents: <none>\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:15.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-66" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":3,"skipped":349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:15.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should reuse port when apply to an existing SVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:807
STEP: creating Agnhost SVC
May 25 11:01:15.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7342 create -f -'
May 25 11:01:16.260: INFO: stderr: ""
May 25 11:01:16.260: INFO: stdout: "service/agnhost-primary created\n"
STEP: getting the original port
May 25 11:01:16.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7342 get service agnhost-primary -o jsonpath={.spec.ports[0].port}'
May 25 11:01:16.414: INFO: stderr: ""
May 25 11:01:16.414: INFO: stdout: "6379"
STEP: applying the same configuration
May 25 11:01:16.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7342 apply -f -'
May 25 11:01:16.733: INFO: stderr: "Warning: resource services/agnhost-primary is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\n"
May 25 11:01:16.733: INFO: stdout: "service/agnhost-primary configured\n"
STEP: getting the port after applying configuration
May 25 11:01:16.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-7342 get service agnhost-primary -o jsonpath={.spec.ports[0].port}'
May 25 11:01:16.850: INFO: stderr: ""
May 25 11:01:16.850: INFO: stdout: "6379"
STEP: checking the result
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:16.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7342" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":4,"skipped":498,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:16.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should get componentstatuses
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:781
STEP: getting list of componentstatuses
May 25 11:01:16.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3107 get componentstatuses -o jsonpath={.items[*].metadata.name}'
May 25 11:01:17.142: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 25 11:01:17.142: INFO: stdout: "scheduler controller-manager etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
May 25 11:01:17.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3107 get componentstatuses scheduler'
May 25 11:01:17.274: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 25 11:01:17.274: INFO: stdout: "NAME STATUS MESSAGE ERROR\nscheduler Unhealthy Get \"http://127.0.0.1:10251/healthz\": dial tcp 127.0.0.1:10251: connect: connection refused \n"
STEP: getting status of controller-manager
May 25 11:01:17.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3107 get componentstatuses controller-manager'
May 25 11:01:17.388: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 25 11:01:17.388: INFO: stdout: "NAME STATUS MESSAGE ERROR\ncontroller-manager Unhealthy Get \"http://127.0.0.1:10252/healthz\": dial tcp 127.0.0.1:10252: connect: connection refused \n"
STEP: getting status of etcd-0
May 25 11:01:17.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3107 get componentstatuses etcd-0'
May 25 11:01:17.511: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
May 25 11:01:17.511: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-0 Healthy {\"health\":\"true\"} \n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:17.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3107" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":5,"skipped":548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
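The componentstatuses test above extracts names with `-o jsonpath={.items[*].metadata.name}` and then inspects per-component health; the stderr warning notes that v1 ComponentStatus is deprecated in v1.19+. A minimal sketch of evaluating that health data from the `-o json` form, using a hypothetical sample payload shaped like the log output above (scheduler and controller-manager Unhealthy, etcd-0 Healthy) rather than live cluster output:

```python
import json

# Hypothetical sample shaped like `kubectl get componentstatuses -o json`;
# names and conditions mirror the log above, not a real cluster response.
RAW = json.dumps({
    "kind": "List",
    "items": [
        {"metadata": {"name": "scheduler"},
         "conditions": [{"type": "Healthy", "status": "False",
                         "error": "connection refused"}]},
        {"metadata": {"name": "controller-manager"},
         "conditions": [{"type": "Healthy", "status": "False",
                         "error": "connection refused"}]},
        {"metadata": {"name": "etcd-0"},
         "conditions": [{"type": "Healthy", "status": "True",
                         "message": '{"health":"true"}'}]},
    ],
})

def component_health(raw: str) -> dict:
    """Map component name -> healthy?, from a ComponentStatus list."""
    doc = json.loads(raw)
    out = {}
    for item in doc["items"]:
        healthy = any(c["type"] == "Healthy" and c["status"] == "True"
                      for c in item.get("conditions", []))
        out[item["metadata"]["name"]] = healthy
    return out

print(component_health(RAW))
# {'scheduler': False, 'controller-manager': False, 'etcd-0': True}
```

Because of the deprecation, newer clusters report control-plane health via the apiserver's `/livez` and `/readyz` endpoints instead.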
[BeforeEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:04.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
STEP: Creating the target pod
May 25 11:01:04.157: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:06.280: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:08.164: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:10.162: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 25 11:01:12.162: INFO: The status of Pod pfpod is Running (Ready = false)
May 25 11:01:14.162: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Running 'kubectl port-forward'
May 25 11:01:14.162: INFO: starting port-forward command and streaming output
May 25 11:01:14.162: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=port-forwarding-6459 port-forward --namespace=port-forwarding-6459 pfpod :80'
May 25 11:01:14.163: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Reading data from the local port
STEP: Waiting for the target pod to stop running
May 25 11:01:16.278: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-6459" to be "container terminated"
May 25 11:01:16.282: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.883112ms
May 25 11:01:18.287: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.008489498s
May 25 11:01:18.287: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] [sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:18.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-6459" for this suite.
• [SLOW TEST:14.187 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
With a server listening on localhost
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
that expects NO client request
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
should support a client that connects, sends DATA, and disconnects
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
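The port-forwarding test above exercises a client that connects to the locally forwarded port, sends DATA, and disconnects. A minimal stand-in sketch of that client/server exchange over a local TCP socket (the listener here is a hypothetical substitute for the pod port that `kubectl port-forward pfpod :80` would expose):

```python
import socket
import threading

def serve_once(listener: socket.socket, received: list):
    """Accept one connection, record what arrives, echo it back with a prefix."""
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        received.append(data)
        conn.sendall(b"ack:" + data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 picks a free local port, like `pfpod :80`
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=serve_once, args=(listener, received))
t.start()

# Client side: connect, send DATA, read the reply, disconnect.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"DATA")
    reply = c.recv(1024)

t.join()
listener.close()
print(reply)  # b'ack:DATA'
```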
May 25 11:01:18.381: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:03.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from
May 25 11:01:03.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 create -f -'
May 25 11:01:03.679: INFO: stderr: ""
May 25 11:01:03.679: INFO: stdout: "pod/httpd created\n"
May 25 11:01:03.679: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:01:03.679: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6573" to be "running and ready"
May 25 11:01:03.682: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746166ms
May 25 11:01:05.686: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006896407s
May 25 11:01:07.781: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102116542s
May 25 11:01:09.785: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.106147387s
May 25 11:01:11.790: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.111009752s
May 25 11:01:13.794: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.115238723s
May 25 11:01:15.799: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.11968113s
May 25 11:01:17.803: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.123848136s
May 25 11:01:17.803: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:17.803: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
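The wait above polls the pod every ~2s until Phase is Running and readiness is true, within a 5m budget. A simplified sketch of such a poll-until-ready loop, assuming a caller-supplied `get_status` function (the injectable `sleep`/`clock` parameters are ours, for testability; this is not the framework's actual implementation):

```python
import time

def wait_for_ready(get_status, timeout_s=300.0, poll_s=2.0,
                   sleep=time.sleep, clock=time.monotonic):
    """Poll get_status() -> (phase, ready) until ('Running', True),
    raising TimeoutError once timeout_s elapses."""
    start = clock()
    while True:
        phase, ready = get_status()
        if phase == "Running" and ready:
            return clock() - start
        if clock() - start > timeout_s:
            raise TimeoutError(f"pod still {phase} (ready={ready})")
        sleep(poll_s)

# Simulated status sequence mirroring the httpd pod's progression above:
seq = iter([("Pending", False)] * 3 + [("Running", False)] * 4 + [("Running", True)])
elapsed = wait_for_ready(lambda: next(seq), sleep=lambda s: None)
print("ready")
```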
[It] should support exec
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
STEP: executing a command in the container
May 25 11:01:17.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 exec httpd echo running in container'
May 25 11:01:18.041: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:18.041: INFO: stdout: "running in container\n"
STEP: executing a very long command in the container
May 25 11:01:18.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 exec httpd echo aaaa[... long run of 'a' characters elided ...]'
May 25 11:01:18.241: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:18.241: INFO: stdout: "aaaa[... long run of 'a' characters elided ...]"
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n"
STEP: executing a command in the container with noninteractive stdin
May 25 11:01:18.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 exec -i httpd cat'
May 25 11:01:18.459: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:18.459: INFO: stdout: "abcd1234"
STEP: executing a command in the container with pseudo-interactive stdin
May 25 11:01:18.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 exec -i httpd sh'
May 25 11:01:18.664: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:18.664: INFO: stdout: "hi\n"
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:18.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 delete --grace-period=0 --force -f -'
May 25 11:01:18.783: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:18.783: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:18.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 get rc,svc -l name=httpd --no-headers'
May 25 11:01:18.897: INFO: stderr: "No resources found in kubectl-6573 namespace.\n"
May 25 11:01:18.897: INFO: stdout: ""
May 25 11:01:18.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6573 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:19.008: INFO: stderr: ""
May 25 11:01:19.008: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:19.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6573" for this suite.
• [SLOW TEST:15.709 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should support exec
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":2,"skipped":326,"failed":0}
May 25 11:01:19.017: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:04.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[BeforeEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378
STEP: creating the pod from apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    name: httpd
spec:
  containers:
  - name: httpd
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 5
May 25 11:01:04.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 create -f -'
May 25 11:01:04.379: INFO: stderr: ""
May 25 11:01:04.379: INFO: stdout: "pod/httpd created\n"
May 25 11:01:04.379: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 25 11:01:04.379: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-6209" to be "running and ready"
May 25 11:01:04.382: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.781619ms
May 25 11:01:06.479: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100166769s
May 25 11:01:08.482: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103244635s
May 25 11:01:10.487: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107988238s
May 25 11:01:12.492: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112518491s
May 25 11:01:14.495: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.115923602s
May 25 11:01:16.499: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.119852806s
May 25 11:01:18.503: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.123509922s
May 25 11:01:18.503: INFO: Pod "httpd" satisfied condition "running and ready"
May 25 11:01:18.503: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should support exec through kubectl proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
STEP: Starting kubectl proxy
May 25 11:01:18.503: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 proxy -p 0 --disable-filter'
STEP: Running kubectl via kubectl proxy using --server=http://127.0.0.1:44541
May 25 11:01:18.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 --server=http://127.0.0.1:44541 --namespace=kubectl-6209 exec httpd echo running in container'
May 25 11:01:18.866: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n"
May 25 11:01:18.866: INFO: stdout: "running in container\n"
[AfterEach] Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: using delete to clean up resources
May 25 11:01:18.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 delete --grace-period=0 --force -f -'
May 25 11:01:18.990: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 25 11:01:18.990: INFO: stdout: "pod \"httpd\" force deleted\n"
May 25 11:01:18.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 get rc,svc -l name=httpd --no-headers'
May 25 11:01:19.098: INFO: stderr: "No resources found in kubectl-6209 namespace.\n"
May 25 11:01:19.098: INFO: stdout: ""
May 25 11:01:19.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-6209 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 25 11:01:19.244: INFO: stderr: ""
May 25 11:01:19.244: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:19.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6209" for this suite.
• [SLOW TEST:15.202 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Simple pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
should support exec through kubectl proxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":3,"skipped":266,"failed":0}
May 25 11:01:19.254: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:08.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create/apply a valid CR for CRD with validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
STEP: prepare CRD with validation schema
May 25 11:01:08.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature
STEP: successfully create CR
May 25 11:01:19.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-214 create --validate=true -f -'
May 25 11:01:19.840: INFO: stderr: ""
May 25 11:01:19.840: INFO: stdout: "e2e-test-kubectl-5015-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:19.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-214 delete e2e-test-kubectl-5015-crds test-cr'
May 25 11:01:19.962: INFO: stderr: ""
May 25 11:01:19.962: INFO: stdout: "e2e-test-kubectl-5015-crd.kubectl.example.com \"test-cr\" deleted\n"
STEP: successfully apply CR
May 25 11:01:19.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-214 apply --validate=true -f -'
May 25 11:01:20.322: INFO: stderr: ""
May 25 11:01:20.322: INFO: stdout: "e2e-test-kubectl-5015-crd.kubectl.example.com/test-cr created\n"
May 25 11:01:20.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-214 delete e2e-test-kubectl-5015-crds test-cr'
May 25 11:01:20.439: INFO: stderr: ""
May 25 11:01:20.439: INFO: stdout: "e2e-test-kubectl-5015-crd.kubectl.example.com \"test-cr\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 11:01:20.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-214" for this suite.
• [SLOW TEST:12.042 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl client-side validation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
should create/apply a valid CR for CRD with validation schema
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":3,"skipped":506,"failed":0}
May 25 11:01:20.958: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 11:01:17.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should check if cluster-info dump succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1078
STEP: running cluster-info dump
May 25 11:01:17.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:33295 --kubeconfig=/root/.kube/config --namespace=kubectl-3762 cluster-info dump'
May 25 11:01:20.762: INFO: stderr: ""
May 25 11:01:20.777: INFO: stdout: "{\n \"kind\": \"NodeList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"527719\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"v1.21-control-plane\",\n \"uid\": \"c96de4b5-b5f9-4f91-bfc2-2115352cebf6\",\n \"resourceVersion\": \"526306\",\n \"creationTimestamp\": \"2021-05-24T17:23:54Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"ingress-ready\": \"true\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"v1.21-control-plane\",\n \"kubernetes.io/os\": \"linux\",\n \"node-role.kubernetes.io/control-plane\": \"\",\n \"node-role.kubernetes.io/master\": \"\",\n \"node.kubernetes.io/exclude-from-external-load-balancers\": \"\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.0.0/24\",\n \"podCIDRs\": [\n \"10.244.0.0/24\"\n ],\n \"providerID\": \"kind://docker/v1.21/v1.21-control-plane\",\n \"taints\": [\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ]\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:57:00Z\",\n \"lastTransitionTime\": \"2021-05-24T17:23:48Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": 
\"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:57:00Z\",\n \"lastTransitionTime\": \"2021-05-24T17:23:48Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:57:00Z\",\n \"lastTransitionTime\": \"2021-05-24T17:23:48Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2021-05-25T10:57:00Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:29Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.3\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"v1.21-control-plane\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"b1187601652c41a3b6c159b2e850901f\",\n \"systemUUID\": \"c02e3c8f-3b60-418f-b9cb-d607e75a042a\",\n \"bootID\": \"be455131-27dd-43f1-b9be-d55ec4651321\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": \"v1.21.1\",\n \"kubeProxyVersion\": \"v1.21.1\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.21.1\"\n ],\n \"sizeBytes\": 132714697\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.21.1\"\n ],\n \"sizeBytes\": 126834637\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.21.1\"\n ],\n \"sizeBytes\": 121043253\n },\n {\n \"names\": [\n \"docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85\",\n \"docker.io/sirot/netperf-latest:latest\"\n ],\n \"sizeBytes\": 118405146\n },\n {\n \"names\": [\n 
\"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.21.1\"\n ],\n \"sizeBytes\": 51865908\n },\n {\n \"names\": [\n \"docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07\",\n \"docker.io/envoyproxy/envoy:v1.18.3\"\n ],\n \"sizeBytes\": 51364868\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n ],\n \"sizeBytes\": 50002177\n },\n {\n \"names\": [\n \"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n ],\n \"sizeBytes\": 12945155\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 11888781\n },\n {\n \"names\": [\n \"docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6\",\n \"docker.io/aquasec/kube-bench:0.3.1\"\n ],\n \"sizeBytes\": 8042926\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"docker.io/library/alpine:3.6\"\n ],\n 
\"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"v1.21-worker\",\n \"uid\": \"7394ad8f-d04e-40e6-b9a3-93f7805657f7\",\n \"resourceVersion\": \"526670\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"io.kubernetes.storage.mock/node\": \"some-mock-node\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"v1.21-worker\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.1.0/24\",\n \"podCIDRs\": [\n \"10.244.1.0/24\"\n ],\n \"providerID\": \"kind://docker/v1.21/v1.21-worker\"\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakePTSRes\": \"10\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\",\n \"scheduling.k8s.io/foo\": \"3\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakePTSRes\": \"10\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\",\n \"scheduling.k8s.io/foo\": \"3\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": 
\"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:45Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.4\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"v1.21-worker\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"6c596af55cd744d09557af9da18eac0b\",\n \"systemUUID\": \"446986db-9f1d-427d-b0d8-ddb4009b52ad\",\n \"bootID\": \"be455131-27dd-43f1-b9be-d55ec4651321\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": \"v1.21.1\",\n \"kubeProxyVersion\": \"v1.21.1\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1\",\n \"docker.io/ollivier/clearwater-homer:hunter\"\n ],\n \"sizeBytes\": 344304298\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8\",\n \"docker.io/ollivier/clearwater-astaire:hunter\"\n ],\n \"sizeBytes\": 327310970\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254\",\n \"docker.io/ollivier/clearwater-bono:hunter\"\n ],\n \"sizeBytes\": 303708624\n },\n {\n \"names\": [\n 
\"docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49\",\n \"docker.io/ollivier/clearwater-sprout:hunter\"\n ],\n \"sizeBytes\": 298627136\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9\",\n \"docker.io/ollivier/clearwater-homestead:hunter\"\n ],\n \"sizeBytes\": 295167572\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f\",\n \"docker.io/ollivier/clearwater-ralf:hunter\"\n ],\n \"sizeBytes\": 287441316\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d\",\n \"docker.io/ollivier/clearwater-chronos:hunter\"\n ],\n \"sizeBytes\": 285504787\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.21.1\"\n ],\n \"sizeBytes\": 132714697\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.21.1\"\n ],\n \"sizeBytes\": 126834637\n },\n {\n \"names\": [\n \"docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781\",\n \"docker.io/aquasec/kube-hunter:0.3.1\"\n ],\n \"sizeBytes\": 124684106\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.21.1\"\n ],\n \"sizeBytes\": 121043253\n },\n {\n \"names\": [\n \"docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85\",\n \"docker.io/sirot/netperf-latest:latest\"\n ],\n \"sizeBytes\": 118405146\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n ],\n \"sizeBytes\": 112029652\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d\",\n 
\"k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0\"\n ],\n \"sizeBytes\": 111199402\n },\n {\n \"names\": [\n \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.21.1\"\n ],\n \"sizeBytes\": 51865908\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n ],\n \"sizeBytes\": 50002177\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a\",\n \"k8s.gcr.io/e2e-test-images/nautilus:1.4\"\n ],\n \"sizeBytes\": 49230179\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n ],\n \"sizeBytes\": 41902332\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n ],\n \"sizeBytes\": 40765006\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-iptables@sha256:f30d057c09dda8b8c1d4e48864c2074d49b67c59856118be2134636053803d6d\",\n \"k8s.gcr.io/build-image/debian-iptables:buster-v1.6.0\"\n ],\n \"sizeBytes\": 40403807\n },\n {\n \"names\": [\n \"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n 
\"docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58\",\n \"docker.io/ollivier/clearwater-live-test:hunter\"\n ],\n \"sizeBytes\": 39060692\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276\",\n \"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4\"\n ],\n \"sizeBytes\": 24757245\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n ],\n \"sizeBytes\": 21205045\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n ],\n \"sizeBytes\": 18451536\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n ],\n \"sizeBytes\": 18412631\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n ],\n \"sizeBytes\": 17748448\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n ],\n \"sizeBytes\": 12945155\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 11888781\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n 
\"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n ],\n \"sizeBytes\": 9068367\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n ],\n \"sizeBytes\": 8223849\n },\n {\n \"names\": [\n \"quay.io/coreos/etcd:v2.2.5\"\n ],\n \"sizeBytes\": 7670543\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n ],\n \"sizeBytes\": 6979365\n },\n {\n \"names\": [\n \"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e\",\n \"gcr.io/google-samples/hello-go-gke:1.0\"\n ],\n \"sizeBytes\": 4381769\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n ],\n \"sizeBytes\": 3263463\n },\n {\n \"names\": [\n \"docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb\",\n \"docker.io/appropriate/curl:edge\"\n ],\n \"sizeBytes\": 2854657\n },\n {\n \"names\": [\n \"gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0\",\n \"gcr.io/authenticated-image-pulling/alpine:3.7\"\n ],\n \"sizeBytes\": 2110879\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"docker.io/library/alpine:3.6\"\n ],\n \"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"k8s.gcr.io/busybox:latest\"\n ],\n \"sizeBytes\": 1144547\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n ],\n \"sizeBytes\": 732746\n },\n {\n \"names\": [\n 
\"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47\",\n \"docker.io/library/busybox:1.28\"\n ],\n \"sizeBytes\": 727869\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa\",\n \"k8s.gcr.io/pause:3.3\"\n ],\n \"sizeBytes\": 299480\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"v1.21-worker2\",\n \"uid\": \"864d1a2e-0c98-4f5e-990f-7d3409f7a5fb\",\n \"resourceVersion\": \"526669\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"io.kubernetes.storage.mock/node\": \"some-mock-node\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"v1.21-worker2\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n }\n },\n \"spec\": {\n \"podCIDR\": \"10.244.2.0/24\",\n \"podCIDRs\": [\n \"10.244.2.0/24\"\n ],\n \"providerID\": \"kind://docker/v1.21/v1.21-worker2\"\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakePTSRes\": \"10\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\",\n \"scheduling.k8s.io/foo\": \"3\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakePTSRes\": \"10\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\",\n \"scheduling.k8s.io/foo\": \"3\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n 
\"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2021-05-25T10:59:31Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:45Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.2\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"v1.21-worker2\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"7271b9be492d40e783c234175c73454b\",\n \"systemUUID\": \"cae5b89b-7f40-42f9-9224-827b32be62ee\",\n \"bootID\": \"be455131-27dd-43f1-b9be-d55ec4651321\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": \"v1.21.1\",\n \"kubeProxyVersion\": \"v1.21.1\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1\",\n \"docker.io/ollivier/clearwater-cassandra:hunter\"\n ],\n \"sizeBytes\": 386500834\n },\n {\n \"names\": [\n 
\"docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4\",\n \"docker.io/ollivier/clearwater-homestead-prov:hunter\"\n ],\n \"sizeBytes\": 360721934\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc\",\n \"docker.io/ollivier/clearwater-ellis:hunter\"\n ],\n \"sizeBytes\": 351519591\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1\",\n \"docker.io/ollivier/clearwater-homer:hunter\"\n ],\n \"sizeBytes\": 344304298\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8\",\n \"docker.io/ollivier/clearwater-astaire:hunter\"\n ],\n \"sizeBytes\": 327310970\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254\",\n \"docker.io/ollivier/clearwater-bono:hunter\"\n ],\n \"sizeBytes\": 303708624\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9\",\n \"docker.io/ollivier/clearwater-homestead:hunter\"\n ],\n \"sizeBytes\": 295167572\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.21.1\"\n ],\n \"sizeBytes\": 132714697\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.21.1\"\n ],\n \"sizeBytes\": 126834637\n },\n {\n \"names\": [\n \"docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781\",\n \"docker.io/aquasec/kube-hunter:0.3.1\"\n ],\n \"sizeBytes\": 124684106\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.21.1\"\n ],\n \"sizeBytes\": 121043253\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n 
\"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n ],\n \"sizeBytes\": 112029652\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d\",\n \"k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0\"\n ],\n \"sizeBytes\": 111199402\n },\n {\n \"names\": [\n \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9\",\n \"docker.io/kubernetesui/dashboard:v2.2.0\"\n ],\n \"sizeBytes\": 67775224\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.21.1\"\n ],\n \"sizeBytes\": 51865908\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\",\n \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n ],\n \"sizeBytes\": 50002177\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a\",\n \"k8s.gcr.io/e2e-test-images/nautilus:1.4\"\n ],\n \"sizeBytes\": 49230179\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0\",\n \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-1\"\n ],\n \"sizeBytes\": 41902332\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n ],\n \"sizeBytes\": 40765006\n },\n {\n \"names\": [\n 
\"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n \"docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58\",\n \"docker.io/ollivier/clearwater-live-test:hunter\"\n ],\n \"sizeBytes\": 39060692\n },\n {\n \"names\": [\n \"quay.io/metallb/controller@sha256:68c52b5301b42cad0cbf497f3d83c2e18b82548a9c36690b99b2023c55cb715a\",\n \"quay.io/metallb/controller:main\"\n ],\n \"sizeBytes\": 35989620\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276\",\n \"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4\"\n ],\n \"sizeBytes\": 24757245\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2\",\n \"k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0\"\n ],\n \"sizeBytes\": 21205045\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf\",\n \"k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2\"\n ],\n \"sizeBytes\": 18651485\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n \"k8s.gcr.io/sig-storage/csi-attacher:v2.2.0\"\n ],\n \"sizeBytes\": 18451536\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n \"k8s.gcr.io/sig-storage/csi-resizer:v0.5.0\"\n ],\n \"sizeBytes\": 18412631\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n 
\"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n ],\n \"sizeBytes\": 17748448\n },\n {\n \"names\": [\n \"docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7\",\n \"docker.io/kubernetesui/metrics-scraper:v1.0.6\"\n ],\n \"sizeBytes\": 15079854\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n ],\n \"sizeBytes\": 12945155\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 11888781\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f\",\n \"k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0\"\n ],\n \"sizeBytes\": 9068367\n },\n {\n \"names\": [\n \"k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793\",\n \"k8s.gcr.io/sig-storage/mock-driver:v4.1.0\"\n ],\n \"sizeBytes\": 8223849\n },\n {\n \"names\": [\n \"docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6\",\n \"docker.io/aquasec/kube-bench:0.3.1\"\n ],\n \"sizeBytes\": 8042926\n },\n {\n \"names\": [\n \"quay.io/coreos/etcd:v2.2.5\"\n ],\n \"sizeBytes\": 7670543\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b\",\n \"k8s.gcr.io/e2e-test-images/nginx:1.14-1\"\n ],\n \"sizeBytes\": 6979365\n },\n {\n \"names\": [\n \"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e\",\n \"gcr.io/google-samples/hello-go-gke:1.0\"\n ],\n \"sizeBytes\": 4381769\n },\n {\n \"names\": [\n 
\"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n ],\n \"sizeBytes\": 3263463\n },\n {\n \"names\": [\n \"docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb\",\n \"docker.io/appropriate/curl:edge\"\n ],\n \"sizeBytes\": 2854657\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"docker.io/library/alpine:3.6\"\n ],\n \"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"k8s.gcr.io/busybox:latest\"\n ],\n \"sizeBytes\": 1144547\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592\",\n \"k8s.gcr.io/e2e-test-images/busybox:1.29-1\"\n ],\n \"sizeBytes\": 732746\n },\n {\n \"names\": [\n \"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47\",\n \"docker.io/library/busybox:1.28\"\n ],\n \"sizeBytes\": 727869\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa\",\n \"k8s.gcr.io/pause:3.3\"\n ],\n \"sizeBytes\": 299480\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"527719\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248cde7b0eef9\",\n \"namespace\": \"kube-system\",\n \"uid\": \"12d749cb-ee06-4f43-9cf1-b16aed44346f\",\n \"resourceVersion\": \"512538\",\n \"creationTimestamp\": \"2021-05-25T10:34:25Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512536\"\n },\n \"reason\": \"FailedScheduling\",\n 
\"message\": \"0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient scheduling.k8s.io/foo.\",\n \"source\": {},\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Warning\",\n \"eventTime\": \"2021-05-25T10:34:24.999424Z\",\n \"action\": \"Scheduling\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-v1.21-control-plane\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248ce33975c70\",\n \"namespace\": \"kube-system\",\n \"uid\": \"56faf346-f043-4cb6-9d5d-d79eb8c68235\",\n \"resourceVersion\": \"512544\",\n \"creationTimestamp\": \"2021-05-25T10:34:26Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512540\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient scheduling.k8s.io/foo.\",\n \"source\": {},\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Warning\",\n \"eventTime\": \"2021-05-25T10:34:26.272806Z\",\n \"action\": \"Scheduling\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-v1.21-control-plane\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248d0477ebaab\",\n \"namespace\": \"kube-system\",\n \"uid\": \"4f10d928-a2af-4abb-a19f-b5c67a637fd3\",\n \"resourceVersion\": \"512571\",\n \"creationTimestamp\": \"2021-05-25T10:34:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512540\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned 
kube-system/critical-pod to v1.21-worker\",\n \"source\": {},\n \"firstTimestamp\": null,\n \"lastTimestamp\": null,\n \"type\": \"Normal\",\n \"eventTime\": \"2021-05-25T10:34:35.196670Z\",\n \"action\": \"Binding\",\n \"reportingComponent\": \"default-scheduler\",\n \"reportingInstance\": \"default-scheduler-v1.21-control-plane\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248d064686eea\",\n \"namespace\": \"kube-system\",\n \"uid\": \"4fb3ead1-8982-4496-9ae0-595e1067e0f7\",\n \"resourceVersion\": \"512574\",\n \"creationTimestamp\": \"2021-05-25T10:34:35Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512572\"\n },\n \"reason\": \"AddedInterface\",\n \"message\": \"Add eth0 [10.244.1.249/24]\",\n \"source\": {\n \"component\": \"multus\"\n },\n \"firstTimestamp\": \"2021-05-25T10:34:35Z\",\n \"lastTimestamp\": \"2021-05-25T10:34:35Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248d0a57766f6\",\n \"namespace\": \"kube-system\",\n \"uid\": \"405eabfb-78e4-4e9a-b538-ca3fea261146\",\n \"resourceVersion\": \"512577\",\n \"creationTimestamp\": \"2021-05-25T10:34:36Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512570\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/pause:3.4.1\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-worker\"\n },\n \"firstTimestamp\": \"2021-05-25T10:34:36Z\",\n \"lastTimestamp\": \"2021-05-25T10:34:36Z\",\n \"count\": 
1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248d0b8ed3937\",\n \"namespace\": \"kube-system\",\n \"uid\": \"7eb74357-ae1d-4f39-8313-deb9c33881c5\",\n \"resourceVersion\": \"512580\",\n \"creationTimestamp\": \"2021-05-25T10:34:37Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512570\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-worker\"\n },\n \"firstTimestamp\": \"2021-05-25T10:34:37Z\",\n \"lastTimestamp\": \"2021-05-25T10:34:37Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168248d0c10aae60\",\n \"namespace\": \"kube-system\",\n \"uid\": \"028267de-408d-4aae-b214-f01bdbbf0248\",\n \"resourceVersion\": \"512581\",\n \"creationTimestamp\": \"2021-05-25T10:34:37Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512570\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-worker\"\n },\n \"firstTimestamp\": \"2021-05-25T10:34:37Z\",\n \"lastTimestamp\": \"2021-05-25T10:34:37Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": 
\"critical-pod.168248d12b86a407\",\n \"namespace\": \"kube-system\",\n \"uid\": \"1cf8cad3-b76a-4a13-83d7-9ec32a55d124\",\n \"resourceVersion\": \"512588\",\n \"creationTimestamp\": \"2021-05-25T10:34:39Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"652e2d97-36e7-4476-8ada-1777a9a8e4b0\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"512570\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Killing\",\n \"message\": \"Stopping container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-worker\"\n },\n \"firstTimestamp\": \"2021-05-25T10:34:39Z\",\n \"lastTimestamp\": \"2021-05-25T10:34:39Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"etcd-v1.21-control-plane.16822374535f93d3\",\n \"namespace\": \"kube-system\",\n \"uid\": \"eacca28b-b161-4bcd-9c76-70e0b8dfc558\",\n \"resourceVersion\": \"524436\",\n \"creationTimestamp\": \"2021-05-25T10:49:59Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"etcd-v1.21-control-plane\",\n \"uid\": \"284bceacace85033c20ef9ba60cb1175\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{etcd}\"\n },\n \"reason\": \"Unhealthy\",\n \"message\": \"Liveness probe failed: HTTP probe failed with statuscode: 503\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-control-plane\"\n },\n \"firstTimestamp\": \"2021-05-24T23:09:58Z\",\n \"lastTimestamp\": \"2021-05-25T10:49:58Z\",\n \"count\": 4,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-v1.21-control-plane.16822373d3cf314d\",\n \"namespace\": \"kube-system\",\n \"uid\": \"52945692-77e0-4300-8fce-045dd6415756\",\n 
\"resourceVersion\": \"524457\",\n \"creationTimestamp\": \"2021-05-25T07:06:33Z\"\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"kube-apiserver-v1.21-control-plane\",\n \"uid\": \"5ab098758dbf0fc9b2d04c6559ca1256\",\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.containers{kube-apiserver}\"\n },\n \"reason\": \"Unhealthy\",\n \"message\": \"Readiness probe failed: HTTP probe failed with statuscode: 500\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"v1.21-control-plane\"\n },\n \"firstTimestamp\": \"2021-05-24T23:09:56Z\",\n \"lastTimestamp\": \"2021-05-25T10:50:10Z\",\n \"count\": 22,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"527719\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"527720\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n \"uid\": \"58cdda53-1891-444f-873a-c9d3460a3fcd\",\n \"resourceVersion\": \"273\",\n \"creationTimestamp\": \"2021-05-24T17:23:57Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"kubernetes.io/cluster-service\": \"true\",\n \"kubernetes.io/name\": \"CoreDNS\"\n },\n \"annotations\": {\n \"prometheus.io/port\": \"9153\",\n \"prometheus.io/scrape\": \"true\"\n }\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"dns\",\n \"protocol\": \"UDP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"dns-tcp\",\n \"protocol\": \"TCP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 9153,\n \"targetPort\": 9153\n }\n ],\n \"selector\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"clusterIP\": \"10.96.0.10\",\n \"clusterIPs\": [\n \"10.96.0.10\"\n ],\n \"type\": \"ClusterIP\",\n 
\"sessionAffinity\": \"None\",\n \"ipFamilies\": [\n \"IPv4\"\n ],\n \"ipFamilyPolicy\": \"SingleStack\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"527721\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"create-loop-devs\",\n \"namespace\": \"kube-system\",\n \"uid\": \"b79f343c-7028-4aa9-82ea-87569feda190\",\n \"resourceVersion\": \"344509\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"app\": \"create-loop-devs\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"create-loop-devs\\\"},\\\"name\\\":\\\"create-loop-devs\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true; do\\\\n for i in $(seq 0 1000); do\\\\n if ! 
[ -e /dev/loop$i ]; then\\\\n mknod /dev/loop$i b 7 $i\\\\n fi\\\\n done\\\\n sleep 100000000\\\\ndone\\\\n\\\"],\\\"image\\\":\\\"alpine:3.6\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"loopdev\\\",\\\"resources\\\":{},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/dev\\\",\\\"name\\\":\\\"dev\\\"}]}],\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev\\\"},\\\"name\\\":\\\"dev\\\"}]}}}}\\n\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"create-loop-devs\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"name\": \"create-loop-devs\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet\",\n \"namespace\": \"kube-system\",\n \"uid\": \"7df9a59e-e01c-4b07-802a-0301a3e7e104\",\n \"resourceVersion\": \"615\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:23:59Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kindnet\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n 
}\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"v1.21-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n 
\"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds\",\n \"namespace\": \"kube-system\",\n \"uid\": \"35e3e040-e7ed-42ff-a1a7-0543ccdb251f\",\n \"resourceVersion\": \"344392\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:25:29Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"name\": \"multus\",\n \"tier\": \"node\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-multus-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"multus\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--multus-conf-file=auto\\\",\\\"--cni-version=0.3.1\\\"],\\\"command\\\":[\\\"/entrypoint.sh\\\"],\\\"image\\\":\\\"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\\\",\\\"name\\\":\\\"kube-multus\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\",\\\"name\\\":\\\"multus-cfg\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"multus\\\",\\\"terminationGracePeriodSeconds\\\":10,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\
"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/opt/cni/bin\\\"},\\\"name\\\":\\\"cnibin\\\"},{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"cni-conf.json\\\",\\\"path\\\":\\\"70-multus.conf\\\"}],\\\"name\\\":\\\"multus-cni-config\\\"},\\\"name\\\":\\\"multus-cfg\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"RollingUpdate\\\"}}}\\n\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"multus\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"multus\",\n \"name\": \"multus\",\n \"tier\": \"node\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n 
],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy\",\n \"namespace\": \"kube-system\",\n \"uid\": \"dd5ce845-c418-41b5-959c-238f11c79573\",\n \"resourceVersion\": \"597\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:23:57Z\",\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n 
{\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls\",\n \"namespace\": \"kube-system\",\n \"uid\": \"885fb56b-d400-4d89-8bca-fa8313ef8be9\",\n \"resourceVersion\": \"344309\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"app\": \"tune-sysctls\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"tune-sysctls\\\"},\\\"name\\\":\\\"tune-sysctls\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"tune-sysctls\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"name\\\":\\\"tune-sysctls\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true; do\\\\n sysctl -w fs.inotify.max_user_watches=524288\\\\n sleep 10\\\\ndone\\\\n\\\"],\\\"image\\\":\\\"alpine:3.6\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"setsysctls\\\",\\\"resources\\\":{},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys\\\",\\\"name\\\":\\\"sys\\\"}]}],\\\"hostIPC\\\":true,\\\"hostNetwork\\\":true,\\\"hostPID\\\":true,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/sys\\\"},\\\"name\\\":\\\"sys\\\"}]}}}}\\n\"\n }\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"tune-sysctls\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"name\": \"tune-sysctls\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": 
\"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": 0\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n }\n ]\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"527721\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns\",\n \"namespace\": \"kube-system\",\n \"uid\": \"8679a9a5-173b-46d6-9240-a9231da5fe77\",\n \"resourceVersion\": \"391262\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:23:57Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n }\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n 
\"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node-role.kubernetes.io/control-plane\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 10,\n 
\"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 2,\n \"updatedReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"conditions\": [\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2021-05-24T17:24:32Z\",\n \"lastTransitionTime\": \"2021-05-24T17:24:11Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"coredns-558bd4d5db\\\" has successfully progressed.\"\n },\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2021-05-25T02:18:51Z\",\n \"lastTransitionTime\": \"2021-05-25T02:18:51Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"resourceVersion\": \"527721\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-558bd4d5db\",\n \"namespace\": \"kube-system\",\n \"uid\": \"1f2c885b-90d5-49f7-9f3d-51d243f562b5\",\n \"resourceVersion\": \"391261\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-24T17:24:11Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"558bd4d5db\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"2\",\n \"deployment.kubernetes.io/max-replicas\": \"3\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"coredns\",\n \"uid\": \"8679a9a5-173b-46d6-9240-a9231da5fe77\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"558bd4d5db\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"558bd4d5db\"\n }\n },\n \"spec\": {\n 
\"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {},\n \"schedulerName\": 
\"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node-role.kubernetes.io/control-plane\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 2,\n \"fullyLabeledReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"observedGeneration\": 1\n }\n }\n ]\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"resourceVersion\": \"527721\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-558bd4d5db-46k4j\",\n \"generateName\": \"coredns-558bd4d5db-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"b2a1669c-ec93-4e8e-9bd8-77d529d56777\",\n \"resourceVersion\": \"391253\",\n \"creationTimestamp\": \"2021-05-25T02:18:50Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"558bd4d5db\"\n },\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.6\\\"\\n ],\\n \\\"mac\\\": \\\"c6:c2:31:f9:f3:4f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.6\\\"\\n ],\\n \\\"mac\\\": \\\"c6:c2:31:f9:f3:4f\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-558bd4d5db\",\n \"uid\": \"1f2c885b-90d5-49f7-9f3d-51d243f562b5\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n 
\"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-api-access-hfbhj\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"kube-api-access-hfbhj\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n 
\"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"v1.21-worker\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node-role.kubernetes.io/control-plane\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:50Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:51Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:51Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"10.244.1.6\",\n \"podIPs\": [\n {\n \"ip\": 
\"10.244.1.6\"\n }\n ],\n \"startTime\": \"2021-05-25T02:18:50Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T02:18:51Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"imageID\": \"sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899\",\n \"containerID\": \"containerd://c58837fc314b8869bd2d0323dd40f5accbcf1e78a6401302be933f4b28c50173\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-558bd4d5db-kff7s\",\n \"generateName\": \"coredns-558bd4d5db-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"9a64f6be-9f12-4770-af92-ee6646455129\",\n \"resourceVersion\": \"391259\",\n \"creationTimestamp\": \"2021-05-25T02:18:50Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"558bd4d5db\"\n },\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.7\\\"\\n ],\\n \\\"mac\\\": \\\"6a:87:10:a7:fc:80\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.7\\\"\\n ],\\n \\\"mac\\\": \\\"6a:87:10:a7:fc:80\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-558bd4d5db\",\n \"uid\": \"1f2c885b-90d5-49f7-9f3d-51d243f562b5\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-api-access-9ggwn\",\n 
\"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"kube-api-access-9ggwn\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n 
],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"v1.21-worker\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node-role.kubernetes.io/control-plane\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:50Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:51Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:51Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:18:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"10.244.1.7\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.7\"\n }\n ],\n \"startTime\": \"2021-05-25T02:18:50Z\",\n \"containerStatuses\": [\n 
{\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T02:18:51Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n \"imageID\": \"sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899\",\n \"containerID\": \"containerd://3bbd644761d952984787229a7a0f95893d0b9803976f044307a9e8ab057168e3\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-b8n7x\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"a4582dec-f874-4bb4-85b5-fbb02deb602a\",\n \"resourceVersion\": \"1021\",\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"b79f343c-7028-4aa9-82ea-87569feda190\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-j4b2h\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"kube-api-access-j4b2h\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-control-plane\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n 
\"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"10.244.0.5\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.5\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:28Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:25:31Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://48be494048ab5327cd1615e8c5e7e801d521dd5fd91248b1004b5257cd536175\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-lfj6m\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"8d2744f2-252d-453c-82fc-914fd03a65b4\",\n \"resourceVersion\": \"1088\",\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"b79f343c-7028-4aa9-82ea-87569feda190\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n 
\"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-fgxjd\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! [ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"kube-api-access-fgxjd\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-worker2\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n 
},\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"10.244.2.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.2\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:28Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:25:31Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": 
\"containerd://221576c7e505bbf350798d52b0173a3b24160d7f4f8f2635a2c4eec2d22cc71b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-zpb97\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"7efe7a44-c9b6-40e8-ab9e-bb3068ebb4ad\",\n \"resourceVersion\": \"344508\",\n \"creationTimestamp\": \"2021-05-25T02:04:35Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.180\\\"\\n ],\\n \\\"mac\\\": \\\"0a:6b:7a:db:c5:ad\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.180\\\"\\n ],\\n \\\"mac\\\": \\\"0a:6b:7a:db:c5:ad\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"b79f343c-7028-4aa9-82ea-87569feda190\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-2gxrz\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n 
\"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! [ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"kube-api-access-2gxrz\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-worker\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": 
\"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:35Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:36Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:36Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:35Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"10.244.1.180\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.180\"\n }\n ],\n \"startTime\": \"2021-05-25T02:04:35Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T02:04:35Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://85a58be158cb903f9dc4fc98cca673307e3b9f62725787b987a02ea1854f56a9\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"etcd-v1.21-control-plane\",\n \"namespace\": \"kube-system\",\n \"uid\": \"a6aee24f-32c8-4380-8654-37b74d1659f4\",\n \"resourceVersion\": \"502\",\n \"creationTimestamp\": \"2021-05-24T17:24:09Z\",\n \"labels\": {\n \"component\": \"etcd\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubeadm.kubernetes.io/etcd.advertise-client-urls\": \"https://172.18.0.3:2379\",\n \"kubernetes.io/config.hash\": \"284bceacace85033c20ef9ba60cb1175\",\n 
\"kubernetes.io/config.mirror\": \"284bceacace85033c20ef9ba60cb1175\",\n \"kubernetes.io/config.seen\": \"2021-05-24T17:24:02.739677139Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"v1.21-control-plane\",\n \"uid\": \"c96de4b5-b5f9-4f91-bfc2-2115352cebf6\",\n \"controller\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"etcd-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etcd-data\",\n \"hostPath\": {\n \"path\": \"/var/lib/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"etcd\",\n \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n \"command\": [\n \"etcd\",\n \"--advertise-client-urls=https://172.18.0.3:2379\",\n \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n \"--client-cert-auth=true\",\n \"--data-dir=/var/lib/etcd\",\n \"--initial-advertise-peer-urls=https://172.18.0.3:2380\",\n \"--initial-cluster=v1.21-control-plane=https://172.18.0.3:2380\",\n \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n \"--listen-client-urls=https://127.0.0.1:2379,https://172.18.0.3:2379\",\n \"--listen-metrics-urls=http://127.0.0.1:2381\",\n \"--listen-peer-urls=https://172.18.0.3:2380\",\n \"--name=v1.21-control-plane\",\n \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n \"--peer-client-cert-auth=true\",\n \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--snapshot-count=10000\",\n \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"100m\",\n \"ephemeral-storage\": \"100Mi\",\n \"memory\": \"100Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"etcd-data\",\n \"mountPath\": \"/var/lib/etcd\"\n },\n {\n \"name\": \"etcd-certs\",\n \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n }\n ],\n 
\"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 2381,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 2381,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:17Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:17Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:09Z\",\n 
\"containerStatuses\": [\n {\n \"name\": \"etcd\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:23:49Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n \"imageID\": \"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934\",\n \"containerID\": \"containerd://0647fbbf7d55499dd51ba7bc05f48dedb60faa06225b51949de79d3a6fb0709b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-5xbgn\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"83844fc4-1dd7-4f3e-b904-68fe175028bc\",\n \"resourceVersion\": \"614\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"69d97dc4d9\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"7df9a59e-e01c-4b07-802a-0301a3e7e104\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-bsdm4\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": 
\"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"v1.21-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-bsdm4\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"v1.21-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n 
\"v1.21-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:30Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:30Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:25Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:24:30Z\"\n }\n },\n 
\"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": \"containerd://6675dd8ca39256764aa9e6d43c0b4de87c660ae7acdf23db6ff5f5ec3adfaf65\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-64qsq\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"39250d95-495b-4488-ad85-ac5ada398b2d\",\n \"resourceVersion\": \"609\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"69d97dc4d9\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"7df9a59e-e01c-4b07-802a-0301a3e7e104\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-6x2k6\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": 
\"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"v1.21-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-6x2k6\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"v1.21-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": 
\"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:30Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:30Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:25Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:24:29Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": 
\"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": \"containerd://f258d0362dacdb729f80cfadb0feb10fa0fbaf5fe924b8f9bf87acaa2bff99d5\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-x82hf\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"12507833-a432-49f0-a676-f12ea64ed1d5\",\n \"resourceVersion\": \"497\",\n \"creationTimestamp\": \"2021-05-24T17:24:11Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"69d97dc4d9\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"7df9a59e-e01c-4b07-802a-0301a3e7e104\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-h5m6x\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n 
\"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"v1.21-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-h5m6x\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n 
{\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:11Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:16Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:16Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:11Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:11Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:24:15Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": 
\"containerd://29fca3cb4fc90522b10afbc8fd8d9973769b39683d8e0fd381ac05da69a66570\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-v1.21-control-plane\",\n \"namespace\": \"kube-system\",\n \"uid\": \"4b2a100e-537d-4cb3-9b25-08618589e3dc\",\n \"resourceVersion\": \"477994\",\n \"creationTimestamp\": \"2021-05-24T17:24:09Z\",\n \"labels\": {\n \"component\": \"kube-apiserver\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\": \"172.18.0.3:6443\",\n \"kubernetes.io/config.hash\": \"5ab098758dbf0fc9b2d04c6559ca1256\",\n \"kubernetes.io/config.mirror\": \"5ab098758dbf0fc9b2d04c6559ca1256\",\n \"kubernetes.io/config.seen\": \"2021-05-24T17:24:02.739680583Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"v1.21-control-plane\",\n \"uid\": \"c96de4b5-b5f9-4f91-bfc2-2115352cebf6\",\n \"controller\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-apiserver\",\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.21.1\",\n \"command\": [\n \"kube-apiserver\",\n \"--advertise-address=172.18.0.3\",\n 
\"--allow-privileged=true\",\n \"--authorization-mode=Node,RBAC\",\n \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--enable-admission-plugins=NodeRestriction\",\n \"--enable-bootstrap-token-auth=true\",\n \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n \"--etcd-servers=https://127.0.0.1:2379\",\n \"--insecure-port=0\",\n \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n \"--requestheader-allowed-names=front-proxy-client\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n \"--requestheader-group-headers=X-Remote-Group\",\n \"--requestheader-username-headers=X-Remote-User\",\n \"--runtime-config=\",\n \"--secure-port=6443\",\n \"--service-account-issuer=https://kubernetes.default.svc.cluster.local\",\n \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n \"--service-account-signing-key-file=/etc/kubernetes/pki/sa.key\",\n \"--service-cluster-ip-range=10.96.0.0/16\",\n \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"250m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": 
\"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/share/ca-certificates\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/livez\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/readyz\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 1,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/livez\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n 
\"lastTransitionTime\": \"2021-05-25T09:00:47Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T09:00:47Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:09Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-apiserver\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:23:47Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.21.1\",\n \"imageID\": \"sha256:6401e478dcc01ee6bd9969b9a3d88effc390f1f00c00a226663ee7f591691a1a\",\n \"containerID\": \"containerd://6fba178f592ff821d8dd8ebd5582828d5f7846909ebece1e50d4df9083fefaa2\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-v1.21-control-plane\",\n \"namespace\": \"kube-system\",\n \"uid\": \"9a2b3aa2-764e-4c66-bbe5-51c8907d7a9b\",\n \"resourceVersion\": \"461706\",\n \"creationTimestamp\": \"2021-05-24T17:24:09Z\",\n \"labels\": {\n \"component\": \"kube-controller-manager\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"db2c03b7796a7fad7946b4f7786359a4\",\n \"kubernetes.io/config.mirror\": \"db2c03b7796a7fad7946b4f7786359a4\",\n \"kubernetes.io/config.seen\": \"2021-05-24T17:24:02.739683256Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"v1.21-control-plane\",\n \"uid\": \"c96de4b5-b5f9-4f91-bfc2-2115352cebf6\",\n \"controller\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": 
\"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvolume-dir\",\n \"hostPath\": {\n \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/controller-manager.conf\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-controller-manager\",\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.21.1\",\n \"command\": [\n \"kube-controller-manager\",\n \"--allocate-node-cidrs=true\",\n \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--bind-address=127.0.0.1\",\n \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-cidr=10.244.0.0/16\",\n \"--cluster-name=v1.21\",\n \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n \"--controllers=*,bootstrapsigner,tokencleaner\",\n \"--enable-hostpath-provisioner=true\",\n \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--leader-elect=true\",\n \"--port=0\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n \"--service-cluster-ip-range=10.96.0.0/16\",\n 
\"--use-service-account-credentials=true\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"200m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"flexvolume-dir\",\n \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/share/ca-certificates\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10257,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10257,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n 
\"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T07:06:52Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T07:06:52Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:09Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-controller-manager\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T07:06:42Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 255,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-24T17:23:47Z\",\n \"finishedAt\": \"2021-05-25T07:06:36Z\",\n \"containerID\": \"containerd://ee3a9793b80a7629287e718f2d24050b6458702a8bfca827422d97138f2c48e1\"\n }\n },\n \"ready\": true,\n \"restartCount\": 1,\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.21.1\",\n \"imageID\": \"sha256:d0d10a483067aa39ee49cf08d0f61f7a6acbdd8298d81a08ac03351a1359ed95\",\n \"containerID\": \"containerd://cfa80904c9decc1c1dcc4278505e62ff28f7cc8b81d1a579c05a3a0f17d74bbe\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-chmxd\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"c67a000f-fce5-4817-92c0-a87ecb34af56\",\n \"resourceVersion\": \"1231\",\n \"creationTimestamp\": 
\"2021-05-24T17:25:29Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"35e3e040-e7ed-42ff-a1a7-0543ccdb251f\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-api-access-9882k\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": 
\"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"kube-api-access-9882k\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"v1.21-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n 
\"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:57Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:57Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:29Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:25:57Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 1,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-24T17:25:54Z\",\n \"finishedAt\": \"2021-05-24T17:25:56Z\",\n \"containerID\": \"containerd://2efd6518eeae1a4c848a8a9abb3d2491ce5aa46bfb7bbc854f3f7175b94e8ffb\"\n }\n },\n \"ready\": true,\n \"restartCount\": 1,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://66100b93c39b5710a836bcc13b40d302ee098ce4d69372d8821b00ee9ec116cf\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-fnq4h\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"ae8e0f33-67a6-46b4-8d07-c71df3275961\",\n \"resourceVersion\": \"344391\",\n \"creationTimestamp\": \"2021-05-25T02:04:15Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n 
\"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"35e3e040-e7ed-42ff-a1a7-0543ccdb251f\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-api-access-bdsf8\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"kube-api-access-bdsf8\",\n \"readOnly\": true,\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"v1.21-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n 
\"lastTransitionTime\": \"2021-05-25T02:04:15Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:17Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:17Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:04:15Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-25T02:04:15Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T02:04:16Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://3a6b8433ece1829290106888d8377fa9249309a11cb713be5e2246bb205ffe42\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-w7mzq\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"6e541b78-5666-45e5-aa89-76892cf0c588\",\n \"resourceVersion\": \"1355\",\n \"creationTimestamp\": \"2021-05-24T17:25:29Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"35e3e040-e7ed-42ff-a1a7-0543ccdb251f\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": 
\"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"kube-api-access-9vzzk\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"kube-api-access-9vzzk\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": 
\"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:26:16Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:26:16Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": 
\"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:29Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:26:15Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 1,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-24T17:25:58Z\",\n \"finishedAt\": \"2021-05-24T17:25:59Z\",\n \"containerID\": \"containerd://bd394dd458129759776d99df82534a7b1738ff6a7dbca0c558b9e4e57bf552f3\"\n }\n },\n \"ready\": true,\n \"restartCount\": 2,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://abfdda4833bccf436e87db8124a73a9df462ae6b1cbfcd1103740114eef9cbbc\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-c2smh\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"551962aa-4b20-42a6-9d85-2f04001cc627\",\n \"resourceVersion\": \"489\",\n \"creationTimestamp\": \"2021-05-24T17:24:11Z\",\n \"labels\": {\n \"controller-revision-hash\": \"6bc6858f58\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"dd5ce845-c418-41b5-959c-238f11c79573\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n 
\"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-db6tv\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-db6tv\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n 
\"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:11Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:14Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:14Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:11Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:11Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:24:13Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"imageID\": \"sha256:ebd41ad8710f9766775f044b3a341b06d47e8d471719f998c97ab509deb4f8ad\",\n \"containerID\": \"containerd://c199b3dca5034c172aa8d03b1933766ba7b5092a6102c6f513336ed958da23c6\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-pjm2c\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"60bbf081-054b-4cbf-a0a1-f4b4a5992622\",\n \"resourceVersion\": \"584\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"controller-revision-hash\": \"6bc6858f58\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"dd5ce845-c418-41b5-959c-238f11c79573\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-fkqkd\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n 
\"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-fkqkd\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"v1.21-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n 
\"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:28Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:28Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:25Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": 
\"2021-05-24T17:24:27Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"imageID\": \"sha256:ebd41ad8710f9766775f044b3a341b06d47e8d471719f998c97ab509deb4f8ad\",\n \"containerID\": \"containerd://a0e24f0e44a0ef86a6b8db6669762da33df27f2094b005d64aad5687f074b60f\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-wg4wq\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"82e74192-c1d6-4cec-ab93-0fb208cb345a\",\n \"resourceVersion\": \"596\",\n \"creationTimestamp\": \"2021-05-24T17:24:25Z\",\n \"labels\": {\n \"controller-revision-hash\": \"6bc6858f58\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"dd5ce845-c418-41b5-959c-238f11c79573\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-jhs9z\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": 
\"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-api-access-jhs9z\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"v1.21-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": 
\"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:29Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:29Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:25Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:25Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:24:28Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.21.1\",\n \"imageID\": \"sha256:ebd41ad8710f9766775f044b3a341b06d47e8d471719f998c97ab509deb4f8ad\",\n \"containerID\": \"containerd://e1e3aedbff5b183e49b933fba5fbcf7975415e94e9dc7f159a406a7781617825\",\n 
\"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-v1.21-control-plane\",\n \"namespace\": \"kube-system\",\n \"uid\": \"ac6eef47-23ac-4325-80bb-d6ab06594770\",\n \"resourceVersion\": \"461708\",\n \"creationTimestamp\": \"2021-05-24T17:24:09Z\",\n \"labels\": {\n \"component\": \"kube-scheduler\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"64b4facb669b8c69dc35b6bbbc0bbd5d\",\n \"kubernetes.io/config.mirror\": \"64b4facb669b8c69dc35b6bbbc0bbd5d\",\n \"kubernetes.io/config.seen\": \"2021-05-24T17:24:02.739640077Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"v1.21-control-plane\",\n \"uid\": \"c96de4b5-b5f9-4f91-bfc2-2115352cebf6\",\n \"controller\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/scheduler.conf\",\n \"type\": \"FileOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"image\": \"k8s.gcr.io/kube-scheduler:v1.21.1\",\n \"command\": [\n \"kube-scheduler\",\n \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--bind-address=127.0.0.1\",\n \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--leader-elect=true\",\n \"--port=0\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"100m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10259,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n 
\"path\": \"/healthz\",\n \"port\": 10259,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T07:06:53Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T07:06:53Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:24:09Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:24:09Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-scheduler\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T07:06:42Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 255,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-24T17:23:47Z\",\n \"finishedAt\": \"2021-05-25T07:06:36Z\",\n 
\"containerID\": \"containerd://d594c5074251250a9b87d806da103c4513480c53b45b0fc415a2abba8c9aa6e1\"\n }\n },\n \"ready\": true,\n \"restartCount\": 1,\n \"image\": \"k8s.gcr.io/kube-scheduler:v1.21.1\",\n \"imageID\": \"sha256:7813cf876a0d4e8176fa5106c0142ecedba78375e815915564a03d2bf0e1361f\",\n \"containerID\": \"containerd://e73e315a7d7f40cb6d3fe09753df135efebf0fe44a5679d8c3d95b08d28c8432\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-4ntcs\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"55bc8cb9-f287-477e-b693-fb326c608851\",\n \"resourceVersion\": \"344308\",\n \"creationTimestamp\": \"2021-05-25T02:03:55Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"885fb56b-d400-4d89-8bca-fa8313ef8be9\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-qhnr4\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n 
\"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"kube-api-access-qhnr4\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-worker\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n 
\"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:03:55Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:03:57Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:03:57Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-25T02:03:55Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-25T02:03:55Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-25T02:03:56Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://199cd422d8ca2d252fec60ec596cda1a0e5de51574250e2de39e9118f9e0f989\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-b7rgm\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"f32351a1-5c78-4120-8a84-ee198e60a14a\",\n \"resourceVersion\": \"1028\",\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"885fb56b-d400-4d89-8bca-fa8313ef8be9\",\n \"controller\": true,\n \"blockOwnerDeletion\": 
true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-mswzj\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"kube-api-access-mswzj\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-worker2\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": 
\"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:28Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:25:32Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n 
\"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://979f793c50cea858555185610f0bd0180b22ee7178c780df7d978d9843b544e1\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-t9v46\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"uid\": \"acade789-dfb7-4fba-bee6-95e98620fddb\",\n \"resourceVersion\": \"1024\",\n \"creationTimestamp\": \"2021-05-24T17:25:28Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"885fb56b-d400-4d89-8bca-fa8313ef8be9\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-api-access-hfd42\",\n \"projected\": {\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"name\": \"kube-root-ca.crt\",\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ]\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"path\": \"namespace\",\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n }\n }\n ]\n }\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"kube-api-access-hfd42\",\n \"readOnly\": true,\n \"mountPath\": 
\"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"v1.21-control-plane\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"v1.21-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n 
\"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:32Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T17:25:28Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-24T17:25:28Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T17:25:32Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://c82c10a84b9e45f1ff9a013e1c2d6fc0e1251894215cf51261fd5f7947add44f\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n }\n ]\n}\n==== START logs for container coredns of pod kube-system/coredns-558bd4d5db-46k4j ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7\nCoreDNS-1.8.0\nlinux/amd64, go1.15.3, 054c9ae\n[ERROR] plugin/errors: 2 homestead-prov. AAAA: read udp 10.244.1.6:40090->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 ralf. A: read udp 10.244.1.6:36579->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 ralf. A: read udp 10.244.1.6:38873->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. AAAA: read udp 10.244.1.6:37264->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.6:34791->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. 
AAAA: read udp 10.244.1.6:41817->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.6:60894->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.6:48959->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. AAAA: read udp 10.244.1.6:42348->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. AAAA: read udp 10.244.1.6:57397->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. A: read udp 10.244.1.6:35682->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. AAAA: read udp 10.244.1.6:35215->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. AAAA: read udp 10.244.1.6:37874->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.6:52780->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. A: read udp 10.244.1.6:37723->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.6:39314->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.6:56423->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.6:35292->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.6:43660->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.6:47598->172.18.0.1:53: i/o timeout\n==== END logs for container coredns of pod kube-system/coredns-558bd4d5db-46k4j ====\n==== START logs for container coredns of pod kube-system/coredns-558bd4d5db-kff7s ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7\nCoreDNS-1.8.0\nlinux/amd64, go1.15.3, 054c9ae\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.7:60690->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 ralf. AAAA: read udp 10.244.1.7:51187->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. 
AAAA: read udp 10.244.1.7:45366->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.7:44916->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.7:56814->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:57002->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 ralf. AAAA: read udp 10.244.1.7:42142->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.7:52491->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. A: read udp 10.244.1.7:35370->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. A: read udp 10.244.1.7:47800->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:57573->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 astaire. A: read udp 10.244.1.7:55584->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.7:46927->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:48529->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. AAAA: read udp 10.244.1.7:36012->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:49066->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:48556->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. A: read udp 10.244.1.7:48469->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.7:37326->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. AAAA: read udp 10.244.1.7:34466->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.7:47871->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.7:35217->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead. AAAA: read udp 10.244.1.7:57949->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. 
AAAA: read udp 10.244.1.7:47421->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.7:35556->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 homestead-prov. A: read udp 10.244.1.7:49669->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. A: read udp 10.244.1.7:48263->172.18.0.1:53: i/o timeout\n[ERROR] plugin/errors: 2 sprout. AAAA: read udp 10.244.1.7:52669->172.18.0.1:53: i/o timeout\n==== END logs for container coredns of pod kube-system/coredns-558bd4d5db-kff7s ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-b8n7x ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-b8n7x ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-lfj6m ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-lfj6m ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-zpb97 ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-zpb97 ====\n==== START logs for container etcd of pod kube-system/etcd-v1.21-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2021-05-24 17:23:49.351899 I | etcdmain: etcd Version: 3.4.13\n2021-05-24 17:23:49.351940 I | etcdmain: Git SHA: ae9734ed2\n2021-05-24 17:23:49.351944 I | etcdmain: Go Version: go1.12.17\n2021-05-24 17:23:49.351951 I | etcdmain: Go OS/Arch: linux/amd64\n2021-05-24 17:23:49.351955 I | etcdmain: setting maximum number of CPUs to 88, total number of available CPUs is 88\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2021-05-24 17:23:49.352076 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2021-05-24 17:23:49.352987 I | embed: name = v1.21-control-plane\n2021-05-24 17:23:49.353004 I | embed: data dir = 
/var/lib/etcd\n2021-05-24 17:23:49.353009 I | embed: member dir = /var/lib/etcd/member\n2021-05-24 17:23:49.353013 I | embed: heartbeat = 100ms\n2021-05-24 17:23:49.353016 I | embed: election = 1000ms\n2021-05-24 17:23:49.353020 I | embed: snapshot count = 10000\n2021-05-24 17:23:49.353028 I | embed: advertise client URLs = https://172.18.0.3:2379\n2021-05-24 17:23:49.361429 I | etcdserver: starting member 23da9c3f2594532a in cluster d4a51ce2d5480c89\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a switched to configuration voters=()\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a became follower at term 0\nraft2021/05/24 17:23:49 INFO: newRaft 23da9c3f2594532a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a became follower at term 1\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a switched to configuration voters=(2583549131277751082)\n2021-05-24 17:23:49.363047 W | auth: simple token is not cryptographically signed\n2021-05-24 17:23:49.366581 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided]\n2021-05-24 17:23:49.367070 I | etcdserver: 23da9c3f2594532a as single-node; fast-forwarding 9 ticks (election ticks 10)\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a switched to configuration voters=(2583549131277751082)\n2021-05-24 17:23:49.367399 I | etcdserver/membership: added member 23da9c3f2594532a [https://172.18.0.3:2380] to cluster d4a51ce2d5480c89\n2021-05-24 17:23:49.369567 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2021-05-24 17:23:49.369648 I | embed: listening for peers on 172.18.0.3:2380\n2021-05-24 17:23:49.369770 I | embed: listening for metrics on http://127.0.0.1:2381\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a is starting a new election at term 1\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a became candidate at term 2\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a received MsgVoteResp from 23da9c3f2594532a at term 2\nraft2021/05/24 17:23:49 INFO: 23da9c3f2594532a became leader at term 2\nraft2021/05/24 17:23:49 INFO: raft.node: 23da9c3f2594532a elected leader 23da9c3f2594532a at term 2\n2021-05-24 17:23:49.862720 I | etcdserver: setting up the initial cluster version to 3.4\n2021-05-24 17:23:49.863432 N | etcdserver/membership: set the initial cluster version to 3.4\n2021-05-24 17:23:49.863529 I | embed: ready to serve client requests\n2021-05-24 17:23:49.863685 I | etcdserver: published {Name:v1.21-control-plane ClientURLs:[https://172.18.0.3:2379]} to cluster d4a51ce2d5480c89\n2021-05-24 17:23:49.863730 I | etcdserver/api: enabled capabilities for version 3.4\n2021-05-24 17:23:49.863757 I | embed: ready to serve client requests\n2021-05-24 17:23:49.866363 I | embed: serving client requests on 127.0.0.1:2379\n2021-05-24 17:23:49.866474 I | embed: serving client requests on 172.18.0.3:2379\n2021-05-24 17:24:16.430699 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:24:17.328077 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:24:27.329077 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:24:30.908560 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:9147\" took too long (111.401022ms) to execute\n2021-05-24 17:24:37.332248 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:24:47.328539 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:24:57.328701 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:07.328805 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:17.329018 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:27.328408 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:37.329040 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:47.329198 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:25:57.328707 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:07.329041 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:17.329403 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:27.328227 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:37.329062 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:47.328415 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:26:57.328231 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:27:07.328662 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:27:17.329074 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:27:27.328495 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 17:27:37.328694 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:27:47.328483 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:27:57.328501 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:07.328132 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:17.328714 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:27.328559 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:37.328576 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:47.329201 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:28:57.328580 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:07.328603 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:17.329101 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:27.328585 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:37.328202 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:47.329049 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:29:57.328642 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:07.328799 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:17.329415 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:27.328768 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:37.328988 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:47.329711 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:30:57.328823 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:31:07.328420 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:31:17.328953 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
17:31:27.328701 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:31:37.329088 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:31:47.329107 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:31:57.328826 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:07.328451 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:17.329143 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:27.328659 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:37.328641 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:47.328681 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:32:57.329543 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:06.298786 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)\n2021-05-24 17:33:06.301867 I | etcdserver: saved snapshot at index 10001\n2021-05-24 17:33:06.302615 I | etcdserver: compacted raft log at 5001\n2021-05-24 17:33:07.328483 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:17.328598 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:27.328016 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:37.328861 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:47.328387 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:33:49.886757 I | mvcc: store.index: compact 1766\n2021-05-24 17:33:49.909981 I | mvcc: finished scheduled compaction at 1766 (took 19.866422ms)\n2021-05-24 17:33:57.328864 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:34:07.328957 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:34:17.328553 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:34:27.328253 I | etcdserver/api/etcdhttp: /health OK 
(status code 200)\n2021-05-24 17:34:37.328802 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:34:47.328710 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:34:57.328461 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:07.329092 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:17.328701 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:27.328814 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:37.328666 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:47.329166 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:35:57.329159 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:07.329311 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:17.327950 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:27.328727 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:37.329071 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:47.328651 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:36:57.328791 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:07.329045 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:17.328718 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:27.328815 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:37.328918 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:47.328386 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:37:57.328195 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:38:07.328188 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:38:17.329067 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 17:38:27.328460 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:38:37.329026 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:38:47.328070 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:38:49.891738 I | mvcc: store.index: compact 11123\n2021-05-24 17:38:50.049113 I | mvcc: finished scheduled compaction at 11123 (took 150.192469ms)\n2021-05-24 17:38:57.328091 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:07.328078 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:17.328244 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:27.328542 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:37.328528 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:47.328773 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:39:57.328225 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:07.328693 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:17.328930 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:27.328631 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:37.328892 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:47.328389 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:40:57.328982 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:41:07.329050 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:41:17.328923 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:41:27.328803 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:41:37.328816 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 17:41:47.328062 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
2021-05-24 17:41:57.328503 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[etcdhttp /health OK (status code 200) logged every 10s through 2021-05-24 18:57:27; repeated entries elided]
2021-05-24 17:43:49.897366 I | mvcc: store.index: compact 13510
2021-05-24 17:43:49.944928 I | mvcc: finished scheduled compaction at 13510 (took 43.435558ms)
2021-05-24 17:44:00.393942 I | etcdserver: start to snapshot (applied: 20002, lastsnap: 10001)
2021-05-24 17:44:00.396196 I | etcdserver: saved snapshot at index 20002
2021-05-24 17:44:00.396647 I | etcdserver: compacted raft log at 15002
2021-05-24 17:48:49.902446 I | mvcc: store.index: compact 19101
2021-05-24 17:48:49.995978 I | mvcc: finished scheduled compaction at 19101 (took 88.038694ms)
2021-05-24 17:53:49.907421 I | mvcc: store.index: compact 25227
2021-05-24 17:53:50.011694 I | mvcc: finished scheduled compaction at 25227 (took 100.143003ms)
2021-05-24 17:58:49.910853 I | mvcc: store.index: compact 25943
2021-05-24 17:58:49.925455 I | mvcc: finished scheduled compaction at 25943 (took 13.827526ms)
2021-05-24 18:03:49.914852 I | mvcc: store.index: compact 26663
2021-05-24 18:03:49.929345 I | mvcc: finished scheduled compaction at 26663 (took 13.708943ms)
2021-05-24 18:08:49.918438 I | mvcc: store.index: compact 27376
2021-05-24 18:08:49.932790 I | mvcc: finished scheduled compaction at 27376 (took 13.496051ms)
2021-05-24 18:13:14.895919 I | etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2021-05-24 18:13:14.898242 I | etcdserver: saved snapshot at index 30003
2021-05-24 18:13:14.899213 I | etcdserver: compacted raft log at 25003
2021-05-24 18:13:49.923136 I | mvcc: store.index: compact 28092
2021-05-24 18:13:49.937694 I | mvcc: finished scheduled compaction at 28092 (took 13.695509ms)
2021-05-24 18:18:49.927580 I | mvcc: store.index: compact 28812
2021-05-24 18:18:49.942358 I | mvcc: finished scheduled compaction at 28812 (took 13.891935ms)
2021-05-24 18:23:49.932198 I | mvcc: store.index: compact 29528
2021-05-24 18:23:49.946577 I | mvcc: finished scheduled compaction at 29528 (took 13.581669ms)
2021-05-24 18:28:49.936724 I | mvcc: store.index: compact 30245
2021-05-24 18:28:49.951464 I | mvcc: finished scheduled compaction at 30245 (took 13.773394ms)
2021-05-24 18:33:49.940733 I | mvcc: store.index: compact 30965
2021-05-24 18:33:49.958619 I | mvcc: finished scheduled compaction at 30965 (took 16.788907ms)
2021-05-24 18:38:49.945094 I | mvcc: store.index: compact 31685
2021-05-24 18:38:49.959786 I | mvcc: finished scheduled compaction at 31685 (took 13.95922ms)
2021-05-24 18:43:49.949366 I | mvcc: store.index: compact 32406
2021-05-24 18:43:49.963418 I | mvcc: finished scheduled compaction at 32406 (took 13.466667ms)
2021-05-24 18:48:49.953783 I | mvcc: store.index: compact 33121
2021-05-24 18:48:49.968202 I | mvcc: finished scheduled compaction at 33121 (took 13.76144ms)
2021-05-24 18:53:49.957717 I | mvcc: store.index: compact 33845
2021-05-24 18:53:49.972514 I | mvcc: finished scheduled compaction at 33845 (took 13.951844ms)
OK (status code 200)\n2021-05-24 18:57:37.328101 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:57:47.328943 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:57:57.329034 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:07.328187 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:17.328773 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:27.329113 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:37.328450 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:47.328796 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:58:49.962039 I | mvcc: store.index: compact 35216\n2021-05-24 18:58:49.978883 I | mvcc: finished scheduled compaction at 35216 (took 15.642788ms)\n2021-05-24 18:58:57.328133 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:07.328023 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:17.329489 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:27.329037 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:37.328995 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:47.328921 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 18:59:57.329038 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:07.328128 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:17.328913 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:27.328355 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:37.328830 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:47.328819 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:00:57.328416 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
19:01:07.329060 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:01:17.329085 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:01:27.327942 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:01:37.329323 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:01:47.328892 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:01:57.328759 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:07.329158 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:17.328191 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:27.328391 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:37.329072 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:47.328682 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:02:57.329193 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:07.328585 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:17.328278 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:27.328767 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:37.328350 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:47.328707 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:03:48.989537 I | etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)\n2021-05-24 19:03:48.992381 I | etcdserver: saved snapshot at index 40004\n2021-05-24 19:03:48.993118 I | etcdserver: compacted raft log at 35004\n2021-05-24 19:03:49.965693 I | mvcc: store.index: compact 36185\n2021-05-24 19:03:49.981324 I | mvcc: finished scheduled compaction at 36185 (took 14.497541ms)\n2021-05-24 19:03:57.329128 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:04:07.329223 I | etcdserver/api/etcdhttp: 
/health OK (status code 200)\n2021-05-24 19:04:17.329188 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:04:27.328735 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:04:37.328641 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:04:47.329047 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:04:57.328805 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:07.329156 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:17.328693 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:27.328717 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:37.328799 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:47.328541 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:05:57.328570 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:07.328610 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:17.328979 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:27.329161 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:37.328614 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:47.328960 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:06:57.328373 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:07.328568 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:17.328684 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:27.329297 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:37.328263 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:47.329042 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:07:51.816686 I | etcdserver: start to snapshot (applied: 50005, 
lastsnap: 40004)\n2021-05-24 19:07:51.819127 I | etcdserver: saved snapshot at index 50005\n2021-05-24 19:07:51.819462 I | etcdserver: compacted raft log at 45005\n2021-05-24 19:07:57.328513 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:07.328626 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:17.329372 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:27.328133 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:37.329017 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:47.328107 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:08:49.970320 I | mvcc: store.index: compact 37799\n2021-05-24 19:08:50.006812 I | mvcc: finished scheduled compaction at 37799 (took 31.639181ms)\n2021-05-24 19:08:57.328861 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:07.329116 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:17.328871 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:27.329098 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:37.329100 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:47.328737 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:09:49.318059 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000001-000000000000cf98.wal is created\n2021-05-24 19:09:57.329018 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:10:07.328272 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:10:17.328535 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:10:27.328894 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:10:37.327986 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:10:47.328807 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 19:10:57.328720 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:07.328828 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:17.328529 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:27.328678 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:37.328657 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:47.328761 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:11:57.327945 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:07.328466 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:17.328478 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:27.328080 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:37.328240 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:47.328983 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:12:57.329069 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:07.328362 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:17.328846 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:27.329099 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:37.328670 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:47.328875 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:13:49.974918 I | mvcc: store.index: compact 49227\n2021-05-24 19:13:50.163006 I | mvcc: finished scheduled compaction at 49227 (took 177.289012ms)\n2021-05-24 19:13:57.328213 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:14:07.328689 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:14:17.328769 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
19:14:27.329134 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:14:37.328833 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:14:47.328289 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:14:57.328732 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:07.329068 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:17.328197 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:18.045337 I | etcdserver: start to snapshot (applied: 60006, lastsnap: 50005)\n2021-05-24 19:15:18.047807 I | etcdserver: saved snapshot at index 60006\n2021-05-24 19:15:18.048610 I | etcdserver: compacted raft log at 55006\n2021-05-24 19:15:19.445346 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000002711.snap successfully\n2021-05-24 19:15:27.329067 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:37.328326 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:47.328838 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:15:57.328885 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:07.328747 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:17.328702 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:27.329124 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:37.329212 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:47.328528 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:16:57.328422 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:17:07.328615 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:17:17.328568 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:17:27.328464 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 19:17:37.328946 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:17:47.328598 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:17:57.329128 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:07.328116 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:17.329396 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:27.328566 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:37.328766 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:47.328447 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:18:49.979416 I | mvcc: store.index: compact 56758\n2021-05-24 19:18:50.102541 I | mvcc: finished scheduled compaction at 56758 (took 117.60224ms)\n2021-05-24 19:18:57.329073 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:07.328451 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:17.328397 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:27.329095 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:37.329064 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:47.328209 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:19:57.328020 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:07.328528 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:17.328472 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:27.328764 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:37.328865 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:47.328731 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:20:57.329055 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:07.329082 
I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:17.329128 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:27.328640 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:37.328561 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:47.328596 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:21:57.328737 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:07.328094 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:17.328238 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:27.329232 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:37.329150 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:47.328925 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:22:57.328855 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:07.328872 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:17.328527 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:27.328763 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:37.329053 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:47.328954 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:23:49.983723 I | mvcc: store.index: compact 58254\n2021-05-24 19:23:50.000716 I | mvcc: finished scheduled compaction at 58254 (took 15.66424ms)\n2021-05-24 19:23:57.328664 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:24:07.329173 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:24:17.328965 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:24:27.328846 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:24:37.328111 I | etcdserver/api/etcdhttp: 
/health OK (status code 200)\n2021-05-24 19:24:47.329002 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:24:57.328802 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:07.328988 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:17.328792 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:27.328060 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:37.328585 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:47.328818 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:25:57.328975 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:07.328626 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:17.329028 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:27.329076 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:37.328889 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:47.329124 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:26:57.328333 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:07.328874 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:17.328245 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:27.329148 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:37.329076 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:47.329057 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:27:57.328885 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:28:07.328559 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:28:17.328054 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:28:27.328273 I | etcdserver/api/etcdhttp: /health OK (status 
code 200)\n2021-05-24 19:28:37.328796 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:28:47.329176 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:28:49.987239 I | mvcc: store.index: compact 59361\n2021-05-24 19:28:50.005073 I | mvcc: finished scheduled compaction at 59361 (took 15.516778ms)\n2021-05-24 19:28:57.329023 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:07.328981 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:17.328545 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:27.328454 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:37.328481 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:47.328283 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:29:57.329140 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:07.329302 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:17.327951 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:27.329221 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:37.329027 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:47.328497 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:30:57.329076 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:07.328861 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:17.328212 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:27.329303 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:37.328566 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:47.328510 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:31:57.328771 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
19:32:07.328212 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:32:17.328180 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:32:27.328826 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:32:37.328343 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:32:47.329000 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:32:57.328654 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:07.329037 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:17.328765 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:27.328206 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:37.329183 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:47.328466 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:33:49.992176 I | mvcc: store.index: compact 61869\n2021-05-24 19:33:50.025198 I | mvcc: finished scheduled compaction at 61869 (took 30.584496ms)\n2021-05-24 19:33:57.328747 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:07.328021 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:17.329022 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:27.328718 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:37.328127 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:47.328203 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:34:57.329174 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:07.328301 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:17.329103 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:27.329229 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:37.328940 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:47.328002 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:35:57.328050 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:07.328565 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:17.328964 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:27.329112 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:37.330944 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:47.328178 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:36:57.328315 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:07.329216 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:17.328434 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:27.329061 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:37.329089 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:47.329111 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:37:57.328691 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:07.328644 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:17.329114 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:27.328842 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:37.328394 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:47.328890 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:38:49.996983 I | mvcc: store.index: compact 64028\n2021-05-24 19:38:50.031390 I | mvcc: finished scheduled compaction at 64028 (took 32.415796ms)\n2021-05-24 19:38:57.329009 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:39:07.328518 I | etcdserver/api/etcdhttp: /health 
OK (status code 200)\n2021-05-24 19:39:17.328260 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:39:27.328893 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:39:37.328577 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:39:47.328202 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:39:57.327999 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:00.151996 I | etcdserver: start to snapshot (applied: 70007, lastsnap: 60006)\n2021-05-24 19:40:00.153983 I | etcdserver: saved snapshot at index 70007\n2021-05-24 19:40:00.154510 I | etcdserver: compacted raft log at 65007\n2021-05-24 19:40:07.328071 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:17.328663 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:19.467298 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000004e22.snap successfully\n2021-05-24 19:40:27.329219 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:37.328231 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:47.328430 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:40:57.328805 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:07.329069 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:17.329104 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:27.329232 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:37.329031 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:47.328366 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:41:57.328098 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:42:07.328394 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 19:42:17.329415 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 19:42:27.328233 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[... recurring "etcdserver/api/etcdhttp: /health OK (status code 200)" entries, logged every ~10s from 19:42 through 20:52, elided; all other events retained below ...]
2021-05-24 19:43:50.001500 I | mvcc: store.index: compact 64741
2021-05-24 19:43:50.017910 I | mvcc: finished scheduled compaction at 64741 (took 13.167669ms)
2021-05-24 19:48:50.006069 I | mvcc: store.index: compact 73320
2021-05-24 19:48:50.143387 I | mvcc: finished scheduled compaction at 73320 (took 131.694948ms)
2021-05-24 19:53:50.010644 I | mvcc: store.index: compact 75037
2021-05-24 19:53:50.040781 I | mvcc: finished scheduled compaction at 75037 (took 28.803799ms)
2021-05-24 19:55:16.494039 I | etcdserver: start to snapshot (applied: 80008, lastsnap: 70007)
2021-05-24 19:55:16.496288 I | etcdserver: saved snapshot at index 80008
2021-05-24 19:55:16.497018 I | etcdserver: compacted raft log at 75008
2021-05-24 19:55:19.479088 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000007533.snap successfully
2021-05-24 19:58:50.014568 I | mvcc: store.index: compact 75757
2021-05-24 19:58:50.029544 I | mvcc: finished scheduled compaction at 75757 (took 13.750981ms)
2021-05-24 20:03:50.019320 I | mvcc: store.index: compact 77844
2021-05-24 20:03:50.051940 I | mvcc: finished scheduled compaction at 77844 (took 29.726797ms)
2021-05-24 20:04:43.406942 I | etcdserver: start to snapshot (applied: 90009, lastsnap: 80008)
2021-05-24 20:04:43.409856 I | etcdserver: saved snapshot at index 90009
2021-05-24 20:04:43.410399 I | etcdserver: compacted raft log at 85009
2021-05-24 20:04:49.485386 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000009c44.snap successfully
2021-05-24 20:05:43.962020 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000002-00000000000169b6.wal is created
2021-05-24 20:08:37.507999 I | etcdserver: start to snapshot (applied: 100010, lastsnap: 90009)
2021-05-24 20:08:37.510574 I | etcdserver: saved snapshot at index 100010
2021-05-24 20:08:37.510942 I | etcdserver: compacted raft log at 95010
2021-05-24 20:08:49.487678 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-000000000000c355.snap successfully
2021-05-24 20:08:50.024086 I | mvcc: store.index: compact 83613
2021-05-24 20:08:50.114575 I | mvcc: finished scheduled compaction at 83613 (took 84.452073ms)
2021-05-24 20:11:29.409797 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000003-000000000001a35b.wal is created
2021-05-24 20:13:08.840801 I | etcdserver: start to snapshot (applied: 110011, lastsnap: 100010)
2021-05-24 20:13:08.843320 I | etcdserver: saved snapshot at index 110011
2021-05-24 20:13:08.843600 I | etcdserver: compacted raft log at 105011
2021-05-24 20:13:19.490463 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-000000000000ea66.snap successfully
2021-05-24 20:13:25.340322 I | etcdserver: start to snapshot (applied: 120012, lastsnap: 110011)
2021-05-24 20:13:25.344665 I | etcdserver: saved snapshot at index 120012
2021-05-24 20:13:25.345034 I | etcdserver: compacted raft log at 115012
2021-05-24 20:13:28.974155 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000004-000000000001ddbc.wal is created
2021-05-24 20:13:49.490874 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000011177.snap successfully
2021-05-24 20:13:50.027896 I | mvcc: store.index: compact 96331
2021-05-24 20:13:50.286830 I | mvcc: finished scheduled compaction at 96331 (took 247.033856ms)
2021-05-24 20:14:08.243209 I | etcdserver: start to snapshot (applied: 130013, lastsnap: 120012)
2021-05-24 20:14:08.245482 I | etcdserver: saved snapshot at index 130013
2021-05-24 20:14:08.245943 I | etcdserver: compacted raft log at 125013
2021-05-24 20:14:19.491253 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000013888.snap successfully
2021-05-24 20:18:50.031935 I | mvcc: store.index: compact 125322
2021-05-24 20:18:50.627730 I | mvcc: finished scheduled compaction at 125322 (took 578.855986ms)
2021-05-24 20:23:50.036632 I | mvcc: store.index: compact 128506
2021-05-24 20:23:50.099289 I | mvcc: finished scheduled compaction at 128506 (took 60.132896ms)
2021-05-24 20:28:50.040337 I | mvcc: store.index: compact 129251
2021-05-24 20:28:50.056248 I | mvcc: finished scheduled compaction at 129251 (took 14.900999ms)
2021-05-24 20:33:50.045945 I | mvcc: store.index: compact 129975
2021-05-24 20:33:50.060896 I | mvcc: finished scheduled compaction at 129975 (took 14.222791ms)
2021-05-24 20:38:50.051285 I | mvcc: store.index: compact 130692
2021-05-24 20:38:50.066788 I | mvcc: finished scheduled compaction at 130692 (took 14.679719ms)
2021-05-24 20:43:50.055287 I | mvcc: store.index: compact 131416
2021-05-24 20:43:50.070705 I | mvcc: finished scheduled compaction at 131416 (took 14.074781ms)
2021-05-24 20:48:50.060656 I | mvcc: store.index: compact 133110
2021-05-24 20:48:50.092813 I | mvcc: finished scheduled compaction at 133110 (took 30.286934ms)
2021-05-24 20:50:18.105071 I | etcdserver: start to snapshot (applied: 140014, lastsnap: 130013)
2021-05-24 20:50:18.107715 I | etcdserver: saved snapshot at index 140014
2021-05-24 20:50:18.113558 I | etcdserver: compacted raft log at 135014
2021-05-24 20:50:19.528606 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000015f99.snap successfully
2021-05-24 20:52:37.328284 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:52:47.328554 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:52:57.328554 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:07.328396 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:17.329036 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:27.328435 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:37.329172 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:47.328125 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:53:50.064637 I | mvcc: store.index: compact 134338\n2021-05-24 20:53:50.084900 I | mvcc: finished scheduled compaction at 134338 (took 16.257557ms)\n2021-05-24 20:53:57.329178 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:07.328832 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:17.328831 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:27.328630 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:37.329060 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:47.328667 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:54:57.328130 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:07.328381 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:11.906138 I | etcdserver: start to snapshot (applied: 150015, lastsnap: 140014)\n2021-05-24 20:55:11.908240 I | etcdserver: saved snapshot at index 150015\n2021-05-24 20:55:11.908769 I | etcdserver: compacted raft log at 145015\n2021-05-24 20:55:17.328210 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:19.531782 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-00000000000186aa.snap successfully\n2021-05-24 
20:55:27.328977 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:37.328748 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:47.328928 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:55:57.329847 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:07.328961 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:17.329111 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:27.328723 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:37.328532 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:47.328540 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:56:57.328238 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:07.328235 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:17.328668 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:27.328829 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:37.328538 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:47.328694 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:57:57.328961 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:07.329061 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:17.328636 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:27.328791 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:37.328536 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:47.328959 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:58:50.069897 I | mvcc: store.index: compact 141833\n2021-05-24 20:58:50.196058 I | mvcc: finished scheduled compaction at 141833 (took 118.678101ms)\n2021-05-24 20:58:57.328972 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:07.329003 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:17.328273 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:27.328444 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:37.328491 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:47.328441 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 20:59:57.328971 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:07.328220 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:17.328541 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:27.328949 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:37.328174 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:47.328477 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:00:57.328831 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:07.328681 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:17.329059 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:27.328851 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:37.328334 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:47.329254 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:01:57.329041 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:07.328862 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:17.328078 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:27.328984 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:37.329008 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:47.328679 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:02:57.328795 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:07.328338 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:17.328826 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:27.328136 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:37.328366 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:47.328108 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:50.074782 I | mvcc: store.index: compact 148524\n2021-05-24 21:03:50.184861 I | mvcc: finished scheduled compaction at 148524 (took 104.502399ms)\n2021-05-24 21:03:57.328909 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:03:57.743129 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000005-0000000000026737.wal is created\n2021-05-24 21:04:07.328438 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:04:17.328712 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:04:19.570211 I | pkg/fileutil: purged file /var/lib/etcd/member/wal/0000000000000000-0000000000000000.wal successfully\n2021-05-24 21:04:27.328964 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:04:37.328597 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:04:47.329220 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:04:57.328540 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:07.328658 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:17.328644 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:27.328784 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:37.329130 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:47.327927 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:05:57.329037 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:07.328419 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:17.328696 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:27.328086 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:37.329483 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:47.329213 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:06:57.328464 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:07.328180 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:17.328978 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:27.328461 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:37.328712 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:47.329085 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:07:57.328755 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:07.329106 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:17.328927 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:27.329171 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:37.328766 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:47.328059 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:08:50.080007 I | mvcc: store.index: compact 151692\n2021-05-24 21:08:50.143988 I | mvcc: finished scheduled compaction at 151692 (took 60.927478ms)\n2021-05-24 21:08:51.685664 I | etcdserver: start to snapshot (applied: 160016, lastsnap: 150015)\n2021-05-24 21:08:51.688074 I | etcdserver: saved snapshot at index 160016\n2021-05-24 21:08:51.688792 I | etcdserver: compacted raft 
log at 155016\n2021-05-24 21:08:57.328525 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:07.329067 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:17.328524 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:19.555114 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-000000000001adbb.snap successfully\n2021-05-24 21:09:27.328679 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:37.329122 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:47.328201 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:09:57.328532 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:07.328677 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:17.328478 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:27.328285 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:37.329137 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:47.329217 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:10:57.328605 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:07.328843 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:17.327977 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:27.329075 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:37.328995 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:47.328304 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:11:55.484127 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (103.09316ms) to execute\n2021-05-24 21:11:57.328978 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 21:12:07.328975 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:12:17.328201 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:12:27.329300 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:12:37.328720 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:12:47.328690 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:12:57.328063 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:07.328690 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:17.328598 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:27.328559 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:37.328426 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:47.328825 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:13:50.085456 I | mvcc: store.index: compact 154162\n2021-05-24 21:13:50.131859 I | mvcc: finished scheduled compaction at 154162 (took 44.040697ms)\n2021-05-24 21:13:57.328299 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:07.329127 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:17.328824 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:27.328701 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:37.329050 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:47.328136 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:14:57.328910 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:15:07.328806 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:15:17.329084 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:15:27.328301 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 
21:15:37.328940 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:15:47.329159 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:15:57.328539 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:07.328106 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:17.328209 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:27.328232 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:37.328828 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:47.328492 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:16:57.328485 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:07.328478 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:17.329285 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:27.328108 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:37.328795 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:47.328083 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:17:57.328989 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:07.329037 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:17.328920 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:27.328638 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:37.328375 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:47.328808 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:18:50.090153 I | mvcc: store.index: compact 155474\n2021-05-24 21:18:50.106503 I | mvcc: finished scheduled compaction at 155474 (took 14.890963ms)\n2021-05-24 21:18:57.328422 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:07.328381 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:17.328423 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:27.328106 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:37.328735 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:47.328968 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:19:57.328644 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:07.328746 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:17.328524 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:27.328247 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:37.328287 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:47.328481 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:20:57.328972 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:07.329165 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:17.328940 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:27.329097 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:37.328852 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:47.328570 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:21:57.328920 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:07.328179 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:17.328762 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:27.328243 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:37.328621 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:47.328664 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:22:57.329196 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:07.328182 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:17.328858 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:27.328914 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:31.879065 W | etcdserver: read-only range request \"key:\\\"/registry/events/c-rally-c5f8df43-0xi6177p/rally-c5f8df43-slmugz2o-66d6489875-zhbnx.16821d9950e1791e\\\" \" with result \"range_response_count:1 size:914\" took too long (302.063739ms) to execute\n2021-05-24 21:23:31.879213 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (203.987366ms) to execute\n2021-05-24 21:23:37.329207 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:47.327921 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:23:50.095163 I | mvcc: store.index: compact 156749\n2021-05-24 21:23:50.124686 I | mvcc: finished scheduled compaction at 156749 (took 27.502314ms)\n2021-05-24 21:23:57.329198 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:07.328213 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:17.329224 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:27.329084 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:37.328448 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:47.329039 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:24:57.328636 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:07.329087 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:12.979196 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/c-rally-7ade4153-jmnju92l/\\\" range_end:\\\"/registry/limitranges/c-rally-7ade4153-jmnju92l0\\\" \" 
with result \"range_response_count:0 size:6\" took too long (201.733223ms) to execute\n2021-05-24 21:25:12.979627 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/c-rally-7ade4153-jmnju92l/\\\" range_end:\\\"/registry/limitranges/c-rally-7ade4153-jmnju92l0\\\" \" with result \"range_response_count:0 size:6\" took too long (202.000719ms) to execute\n2021-05-24 21:25:17.329224 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:27.328525 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:37.329194 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:47.328181 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:25:57.328537 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:07.328638 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:17.328465 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:27.329180 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:37.328088 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:47.329018 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:26:57.328984 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:07.328779 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:17.328797 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:27.329219 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:31.278581 W | etcdserver: read-only range request \"key:\\\"/registry/pods/c-rally-48ccb860-nkgy0cgh/rally-48ccb860-cvxikxjh-pjt4l\\\" \" with result \"range_response_count:1 size:2795\" took too long (113.729271ms) to execute\n2021-05-24 21:27:31.278871 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (113.035638ms) to execute\n2021-05-24 
21:27:31.579273 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (198.925095ms) to execute\n2021-05-24 21:27:31.579518 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (209.548841ms) to execute\n2021-05-24 21:27:31.579634 W | etcdserver: read-only range request \"key:\\\"/registry/jobs/c-rally-48ccb860-r971r5bo/rally-48ccb860-1gzsg8ux\\\" \" with result \"range_response_count:1 size:1481\" took too long (142.377597ms) to execute\n2021-05-24 21:27:37.328914 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:47.328760 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:27:57.328499 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:07.328544 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:17.329185 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:27.328439 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:37.328726 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:47.328847 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:28:50.099472 I | mvcc: store.index: compact 159605\n2021-05-24 21:28:50.148188 I | mvcc: finished scheduled compaction at 159605 (took 45.281713ms)\n2021-05-24 21:28:57.328711 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:07.328829 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:17.328689 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:27.328782 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:37.328552 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:47.329138 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:29:57.328761 I | etcdserver/api/etcdhttp: 
/health OK (status code 200)\n2021-05-24 21:30:07.328336 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:30:17.328364 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:30:27.329111 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:30:37.328555 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:30:47.328671 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:30:57.328873 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:07.328638 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:13.177824 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (101.889761ms) to execute\n2021-05-24 21:31:17.327931 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:27.328512 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:37.327998 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:47.328291 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:31:57.328619 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:32:07.328419 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:32:10.557128 I | etcdserver: start to snapshot (applied: 170017, lastsnap: 160016)\n2021-05-24 21:32:10.559316 I | etcdserver: saved snapshot at index 170017\n2021-05-24 21:32:10.560114 I | etcdserver: compacted raft log at 165017\n2021-05-24 21:32:17.328853 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:32:19.578673 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-000000000001d4cc.snap successfully\n2021-05-24 21:32:27.328817 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:32:37.328333 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 21:32:47.328721 I | etcdserver/api/etcdhttp: 
/health OK (status code 200)
2021-05-24 21:32:57.329083 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 21:33:50.104013 I | mvcc: store.index: compact 162977
2021-05-24 21:33:50.162301 I | mvcc: finished scheduled compaction at 162977 (took 56.827369ms)
2021-05-24 21:38:50.107543 I | mvcc: store.index: compact 163863
2021-05-24 21:38:50.122857 I | mvcc: finished scheduled compaction at 163863 (took 14.54241ms)
2021-05-24 21:43:50.112202 I | mvcc: store.index: compact 164692
2021-05-24 21:43:50.127616 I | mvcc: finished scheduled compaction at 164692 (took 14.250178ms)
2021-05-24 21:48:50.116822 I | mvcc: store.index: compact 166028
2021-05-24 21:48:50.133127 I | mvcc: finished scheduled compaction at 166028 (took 15.121781ms)
2021-05-24 21:50:10.978101 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:6" took too long (109.387895ms) to execute
2021-05-24 21:52:42.076696 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22464" took too long (159.86007ms) to execute
2021-05-24 21:53:50.121696 I | mvcc: store.index: compact 167085
2021-05-24 21:53:50.140295 I | mvcc: finished scheduled compaction at 167085 (took 16.975739ms)
2021-05-24 21:55:41.876389 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-e4c83d6d-vaj5fjmd/rally-e4c83d6d-ts3z6fkz\" " with result "range_response_count:1 size:2982" took too long (160.074723ms) to execute
2021-05-24 21:55:41.876520 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (204.134686ms) to execute
2021-05-24 21:55:42.083244 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (106.690385ms) to execute
2021-05-24 21:55:42.377119 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22464" took too long (277.81193ms) to execute
2021-05-24 21:55:42.377276 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:6" took too long (272.380282ms) to execute
2021-05-24 21:58:50.125389 I | mvcc: store.index: compact 168378
2021-05-24 21:58:50.156386 I | mvcc: finished scheduled compaction at 168378 (took 28.933792ms)
2021-05-24 21:59:34.775990 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (101.28595ms) to execute
2021-05-24 21:59:34.776115 W | etcdserver: read-only range request "key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true " with result "range_response_count:0 size:8" took too long (194.142309ms) to execute
2021-05-24 21:59:36.376549 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:6" took too long (164.432383ms) to execute
2021-05-24 21:59:36.676446 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (199.805818ms) to execute
2021-05-24 21:59:36.676856 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-c7d1b019-m9adpgmh/rally-c7d1b019-ltvlrf5r\" " with result "range_response_count:1 size:3364" took too long (275.316961ms) to execute
2021-05-24 21:59:36.877221 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.29294ms) to execute
2021-05-24 21:59:37.077184 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:419" took too long (197.843581ms) to execute
2021-05-24 22:00:06.978499 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (201.731125ms) to execute
2021-05-24 22:00:06.978801 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (107.660749ms) to execute
2021-05-24 22:00:06.978839 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-c7d1b019-m9adpgmh/rally-c7d1b019-ltvlrf5r\" " with result "range_response_count:1 size:3376" took too long (152.099424ms) to execute
2021-05-24 22:00:06.978895 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (108.786688ms) to execute
2021-05-24 22:00:06.978985 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (114.629434ms) to execute
2021-05-24 22:00:06.979096 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:342" took too long (267.063042ms) to execute
2021-05-24 22:00:07.277620 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.18.0.3\" " with result "range_response_count:1 size:133" took too long (294.054622ms) to execute
2021-05-24 22:00:07.277857 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (101.552851ms) to execute
2021-05-24 22:01:35.977044 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (301.230448ms) to execute
2021-05-24 22:01:35.977556 W | etcdserver: read-only range request "key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true " with result "range_response_count:0 size:8" took too long (376.178735ms) to execute
2021-05-24 22:01:35.977602 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-683b498a-8woidea6/rally-683b498a-ra8hiosj\" " with result "range_response_count:1 size:3416" took too long (407.908627ms) to execute
2021-05-24 22:01:35.977625 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (227.897505ms) to execute
2021-05-24 22:01:35.977717 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (302.72324ms) to execute
2021-05-24 22:03:07.277935 W | etcdserver: read-only range request "key:\"/registry/namespaces/c-rally-a72779f8-9pcwtlh3\" " with result "range_response_count:1 size:1870" took too long (157.636993ms) to execute
2021-05-24 22:03:50.129493 I | mvcc: store.index: compact 169836
2021-05-24 22:03:50.159892 I | mvcc: finished scheduled compaction at 169836 (took 29.10061ms)
2021-05-24 22:04:00.079286 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:8" took too long (100.394657ms) to execute
2021-05-24 22:04:00.378606 W | etcdserver: read-only range request "key:\"/registry/limitranges/c-rally-c6d40158-50nd2qs0/\" range_end:\"/registry/limitranges/c-rally-c6d40158-50nd2qs00\" " with result "range_response_count:0 size:6" took too long (277.870618ms) to execute
2021-05-24 22:04:01.079680 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22464" took too long (155.172015ms) to execute
2021-05-24 22:04:01.079771 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (109.160437ms) to execute
2021-05-24 22:04:01.079814 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-c6d40158-50nd2qs0/rally-c6d40158-1pf8etas-0\" " with result "range_response_count:1 size:2912" took too long (100.232935ms) to execute
2021-05-24 22:04:42.077940 W | etcdserver: read-only range request "key:\"/registry/jobs/c-rally-68c0781d-sdhq2zyd/rally-68c0781d-1s0irhcb\" " with result "range_response_count:1 size:1475" took too long (181.214785ms) to execute
2021-05-24 22:04:42.777996 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (103.742345ms) to execute
2021-05-24 22:04:42.778140 W | etcdserver: read-only range request "key:\"/registry/pods/c-rally-68c0781d-sdhq2zyd/rally-68c0781d-1s0irhcb-lj6bm\" " with result "range_response_count:1 size:3356" took too long (243.45802ms) to execute
2021-05-24 22:08:12.015511 I | etcdserver: start to snapshot (applied: 180018, lastsnap: 170017)
2021-05-24 22:08:12.017757 I | etcdserver: saved snapshot at index 180018
2021-05-24 22:08:12.018635 I | etcdserver: compacted raft log at 175018
2021-05-24 22:08:19.612287 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-000000000001fbdd.snap successfully
2021-05-24 22:08:26.277558 W | etcdserver: read-only range request "key:\"/registry/events/dns-2620/dns-test-805a9669-fc03-40e9-b9e6-f72149ed0dfc.16822015c79ca4ac\" " with result "range_response_count:1 size:860" took too long (100.091322ms) to execute
2021-05-24 22:08:31.076358 W | etcdserver: read-only range request "key:\"/registry/pods/e2e-kubelet-etc-hosts-7732/test-host-network-pod\" " with result "range_response_count:1 size:3855" took too long (159.902051ms) to execute
2021-05-24 22:08:31.278005 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4152/test-rollover-controller-ws9qb\" " with result "range_response_count:1 size:3226" took too long (191.612679ms) to execute
2021-05-24 22:08:31.278094 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (100.786932ms) to execute
2021-05-24 22:08:31.278524 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3301" took too long (177.900282ms) to execute
2021-05-24 22:08:31.776563 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3584" took too long (151.754275ms) to execute
2021-05-24 22:08:31.776621 W | etcdserver: read-only range request "key:\"/registry/secrets/pods-3706/\" range_end:\"/registry/secrets/pods-37060\" " with result "range_response_count:0 size:6" took too long (169.25965ms) to execute
2021-05-24 22:08:31.776719 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3584" took too long (151.84993ms) to execute
2021-05-24 22:08:31.776827 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (104.327783ms) to execute
2021-05-24 22:08:31.978404 W | etcdserver: read-only range request "key:\"/registry/csistoragecapacities/events-5732/\" range_end:\"/registry/csistoragecapacities/events-57320\" " with result "range_response_count:0 size:6" took too long (195.102352ms) to execute
2021-05-24 22:08:31.978643 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (101.201117ms) to execute
2021-05-24 22:08:31.984790 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3584" took too long (152.229949ms) to execute
2021-05-24 22:08:31.985408 W | etcdserver: read-only range request "key:\"/registry/namespaces/dns-8186\" " with result "range_response_count:1 size:464" took too long (128.998154ms) to execute
2021-05-24 22:08:31.985658 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3584" took too long (199.40333ms) to execute
2021-05-24 22:08:31.987199 W | etcdserver: read-only range request "key:\"/registry/replicasets/deployment-4152/test-rollover-controller\" " with result "range_response_count:1 size:1742" took too long (202.063319ms) to execute
2021-05-24 22:08:31.987264 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:1 size:3584" took too long (201.57051ms) to execute
2021-05-24 22:08:31.987363 W | etcdserver: read-only range request "key:\"/registry/cronjobs/pods-3706/\" range_end:\"/registry/cronjobs/pods-37060\" " with result "range_response_count:0 size:6" took too long (203.966646ms) to execute
2021-05-24 22:08:32.277309 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4152/test-rollover-controller-ws9qb\" " with result "range_response_count:0 size:6" took too long (244.18545ms) to execute
2021-05-24 22:08:32.277400 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/dns-8186/\" range_end:\"/registry/poddisruptionbudgets/dns-81860\" " with result "range_response_count:0 size:6" took too long (277.305563ms) to execute
2021-05-24 22:08:32.277425 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22464" took too long (275.244892ms) to execute
2021-05-24 22:08:32.277474 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:0 size:6" took too long (272.634034ms) to execute
2021-05-24 22:08:32.277523 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/projected-6800/\" range_end:\"/registry/resourcequotas/projected-68000\" " with result "range_response_count:0 size:6" took too long (272.641853ms) to execute
2021-05-24 22:08:32.277572 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4152/test-rollover-controller-ws9qb\" " with result "range_response_count:0 size:6" took too long (242.81512ms) to execute
2021-05-24 22:08:32.277696 W | etcdserver: read-only range request "key:\"/registry/projectcontour.io/extensionservices/pods-3706/\" range_end:\"/registry/projectcontour.io/extensionservices/pods-37060\" " with result "range_response_count:0 size:6" took too long (276.619386ms) to execute
2021-05-24 22:08:32.277827 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/\" range_end:\"/registry/pods/configmap-66780\" " with result "range_response_count:0 size:6" took too long (196.180615ms) to execute
2021-05-24 22:08:32.277936 W | etcdserver: read-only range request "key:\"/registry/events/events-5732/\" range_end:\"/registry/events/events-57320\" " with result "range_response_count:0 size:6" took too long (196.236103ms) to execute
2021-05-24 22:08:32.278008 W | etcdserver: read-only range request "key:\"/registry/pods/configmap-6678/pod-configmaps-0cdb98d4-017a-46e7-87a2-0764e2808c25\" " with result "range_response_count:0 size:6" took too long (163.35798ms) to execute
2021-05-24 22:08:32.278131 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (156.050672ms) to execute
2021-05-24 22:08:35.178932 W | etcdserver: read-only range request "key:\"/registry/events/dns-3044/dns-test-9d821a03-d598-4ae2-8879-4d6748f0635a.1682201a4860fc3f\" " with result "range_response_count:0 size:6" took too long (141.719504ms) to execute
2021-05-24 22:08:35.178980 W | etcdserver: read-only range request "key:\"/registry/pods/events-5732/send-events-de825cb4-0233-4d36-8147-db59da1a8e22\" " with result "range_response_count:1 size:3068" took too long (100.530876ms) to execute
2021-05-24 22:08:35.179168 W | etcdserver: read-only range request "key:\"/registry/deployments/deployment-5220/test-cleanup-deployment\" " with result "range_response_count:1 size:2104" took too long (170.975646ms) to execute
2021-05-24 22:08:35.677289 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (184.218136ms) to execute
2021-05-24 22:08:35.677409 W | etcdserver: read-only range request "key:\"/registry/pods/events-5732/send-events-de825cb4-0233-4d36-8147-db59da1a8e22\" " with result "range_response_count:1 size:3068" took too long (108.063972ms) to execute
2021-05-24 22:08:35.677433 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (184.751018ms) to execute
2021-05-24 22:08:35.677486 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (187.192681ms) to execute
2021-05-24 22:08:35.677545 W | etcdserver: read-only range request "key:\"/registry/pods/container-runtime-6611/termination-message-container1f5f17f9-205d-409c-ac03-559500fd2028\" " with result "range_response_count:1 size:2966" took too long (178.335408ms) to execute
2021-05-24 22:08:35.677704 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (184.720304ms) to execute
2021-05-24 22:08:35.677792 W | etcdserver: read-only range request "key:\"/registry/pods/container-runtime-6611/termination-message-container1f5f17f9-205d-409c-ac03-559500fd2028\" " with result "range_response_count:1 size:2966" took too long (113.489031ms) to execute
2021-05-24 22:08:35.677949 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-v1.21-control-plane\" " with result "range_response_count:1 size:6752" took too long (170.781765ms) to execute
2021-05-24 22:08:35.678069 W | etcdserver: read-only range request "key:\"/registry/namespaces/deployment-5220\" " with result "range_response_count:1 size:1898" took too long (196.686237ms) to execute
2021-05-24 22:08:36.076269 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-5220/test-cleanup-controller-s4kj7\" " with result "range_response_count:0 size:6" took too long (195.830852ms) to execute
2021-05-24 22:08:36.076411 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (186.342507ms) to execute
2021-05-24 22:08:36.076457 W | etcdserver: read-only range request "key:\"/registry/pods/container-probe-9840/test-webserver-9bdb94ff-eeca-439a-9c83-b02f4c6b9308\" " with result "range_response_count:1 size:1703" took too long (187.693159ms) to execute
2021-05-24 22:08:36.076598 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-5220/test-cleanup-controller-s4kj7\" " with result "range_response_count:0 size:6" took too long (196.68833ms) to execute
2021-05-24 22:08:36.076836 W | etcdserver: read-only range request "key:\"/registry/pods/statefulset-4612/\" range_end:\"/registry/pods/statefulset-46120\" " with result "range_response_count:1 size:3744" took too long (175.211175ms) to execute
2021-05-24 22:08:36.280224 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (103.92327ms) to execute
2021-05-24 22:08:36.280801 W | etcdserver: read-only range request "key:\"/registry/statefulsets/statefulset-4612/ss\" " with result "range_response_count:1 size:1588" took too long (200.514318ms) to execute
2021-05-24 22:08:36.280904 W | etcdserver: read-only range request "key:\"/registry/pods/services-5050/affinity-clusterip-p7kb4\" " with result "range_response_count:1 size:3402" took too long (165.107032ms) to execute
2021-05-24 22:08:36.280966 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-5220/test-cleanup-controller-s4kj7\" " with result "range_response_count:0 size:6" took too long (199.167723ms) to execute
2021-05-24 22:08:36.281004 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:8" took too long (368.750262ms) to execute
2021-05-24 22:08:36.579819 W | etcdserver: read-only range request "key:\"/registry/namespaces/deployment-5220\" " with result "range_response_count:0 size:6" took too long (213.906082ms) to execute
2021-05-24 22:08:36.579929 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (103.852083ms) to execute
2021-05-24 22:08:36.580227 W | etcdserver: read-only range request "key:\"/registry/endpointslices/services-5050/affinity-clusterip-22rbp\" " with result "range_response_count:1 size:1086" took too long (197.623703ms) to execute
2021-05-24 22:08:36.580266 W | etcdserver: read-only range request "key:\"/registry/pods/container-probe-8132/busybox-00371398-0625-4ee3-9828-6d6c76351fbf\" " with result "range_response_count:1 size:3205" took too long (120.012402ms) to execute
2021-05-24 22:08:36.580301 W | etcdserver: read-only range request "key:\"/registry/pods/projected-1429/pod-projected-secrets-1df75b41-4cc5-4fd8-9b2b-cf98f23080ae\" " with result "range_response_count:1 size:5824" took too long (104.009813ms) to execute
2021-05-24 22:08:45.680755 W | etcdserver: request "header: txn: success: > failure: >>" with result "size:20" took too long (100.891621ms) to execute
2021-05-24 22:08:50.132840 I | mvcc: store.index: compact 170910
2021-05-24 22:08:50.149477 I | mvcc: finished scheduled compaction at 170910 (took 15.087889ms)
2021-05-24 22:09:47.176742 W | etcdserver: read-only range request 
\"key:\\\"/registry/pods/endpointslice-6751/pod2\\\" \" with result \"range_response_count:1 size:3152\" took too long (268.339177ms) to execute\n2021-05-24 22:09:47.176805 W | etcdserver: read-only range request \"key:\\\"/registry/pods/endpointslice-6751/pod2\\\" \" with result \"range_response_count:1 size:3152\" took too long (104.133094ms) to execute\n2021-05-24 22:09:47.176850 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/statefulset-7756/default\\\" \" with result \"range_response_count:1 size:230\" took too long (274.966027ms) to execute\n2021-05-24 22:09:47.176887 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-lifecycle-hook-2927/pod-with-poststart-exec-hook\\\" \" with result \"range_response_count:1 size:3363\" took too long (101.793586ms) to execute\n2021-05-24 22:09:47.176922 W | etcdserver: read-only range request \"key:\\\"/registry/pods/endpointslice-6751/pod2\\\" \" with result \"range_response_count:1 size:3152\" took too long (283.027579ms) to execute\n2021-05-24 22:09:47.177103 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-lifecycle-hook-2927/pod-with-poststart-exec-hook\\\" \" with result \"range_response_count:1 size:3363\" took too long (273.519595ms) to execute\n2021-05-24 22:09:47.177279 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/webhook-4927\\\" \" with result \"range_response_count:1 size:523\" took too long (252.498315ms) to execute\n2021-05-24 22:09:47.177368 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/webhook-4927-markers\\\" \" with result \"range_response_count:1 size:451\" took too long (248.670886ms) to execute\n2021-05-24 22:09:47.376725 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:09:57.328120 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:07.328608 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:17.328185 
I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:17.579027 W | etcdserver: read-only range request \"key:\\\"/registry/mutatingwebhookconfigurations/\\\" range_end:\\\"/registry/mutatingwebhookconfigurations0\\\" count_only:true \" with result \"range_response_count:0 size:6\" took too long (125.4853ms) to execute\n2021-05-24 22:10:17.675849 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-5077/pod-24484670-c205-4664-8a6b-0b72da3af182\\\" \" with result \"range_response_count:0 size:6\" took too long (199.301234ms) to execute\n2021-05-24 22:10:18.476460 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (196.084631ms) to execute\n2021-05-24 22:10:18.678125 W | etcdserver: read-only range request \"key:\\\"/registry/endpointslices/services-3611/externalsvc-q7bzt\\\" \" with result \"range_response_count:1 size:896\" took too long (194.333439ms) to execute\n2021-05-24 22:10:18.679062 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs/services-3611/externalsvc\\\" \" with result \"range_response_count:1 size:563\" took too long (126.067089ms) to execute\n2021-05-24 22:10:18.679992 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:520\" took too long (190.182749ms) to execute\n2021-05-24 22:10:18.680409 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:645\" took too long (185.688976ms) to execute\n2021-05-24 22:10:18.976384 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (198.949186ms) to execute\n2021-05-24 22:10:18.976865 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-probe-9840/test-webserver-9bdb94ff-eeca-439a-9c83-b02f4c6b9308\\\" \" with 
result \"range_response_count:1 size:3267\" took too long (239.16047ms) to execute\n2021-05-24 22:10:18.976900 W | etcdserver: read-only range request \"key:\\\"/registry/endpointslices/services-3611/externalsvc-q7bzt\\\" \" with result \"range_response_count:1 size:896\" took too long (195.955967ms) to execute\n2021-05-24 22:10:18.976966 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/services-3611/externalsvc\\\" \" with result \"range_response_count:1 size:194\" took too long (196.292416ms) to execute\n2021-05-24 22:10:18.977009 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/services-3611/externalsvc\\\" \" with result \"range_response_count:1 size:194\" took too long (198.777116ms) to execute\n2021-05-24 22:10:18.977041 W | etcdserver: read-only range request \"key:\\\"/registry/endpointslices/services-3611/externalsvc-q7bzt\\\" \" with result \"range_response_count:1 size:896\" took too long (290.557604ms) to execute\n2021-05-24 22:10:19.080662 W | etcdserver: read-only range request \"key:\\\"/registry/ranges/serviceips\\\" \" with result \"range_response_count:1 size:7939\" took too long (101.239175ms) to execute\n2021-05-24 22:10:27.179042 W | etcdserver: read-only range request \"key:\\\"/registry/statefulsets/resourcequota-9164/\\\" range_end:\\\"/registry/statefulsets/resourcequota-91640\\\" \" with result \"range_response_count:0 size:6\" took too long (192.780736ms) to execute\n2021-05-24 22:10:27.179190 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/events-2033/\\\" range_end:\\\"/registry/secrets/events-20330\\\" \" with result \"range_response_count:0 size:6\" took too long (192.211113ms) to execute\n2021-05-24 22:10:27.328076 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:32.377873 W | etcdserver: read-only range request 
\"key:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-312-crds/crd-publish-openapi-8677/test-cr\\\" \" with result \"range_response_count:0 size:6\" took too long (139.795026ms) to execute\n2021-05-24 22:10:32.577191 W | etcdserver: read-only range request \"key:\\\"/registry/projectcontour.io/httpproxies/emptydir-5039/\\\" range_end:\\\"/registry/projectcontour.io/httpproxies/emptydir-50390\\\" \" with result \"range_response_count:0 size:6\" took too long (185.755822ms) to execute\n2021-05-24 22:10:32.886664 W | etcdserver: read-only range request \"key:\\\"/registry/secrets/emptydir-5039/\\\" range_end:\\\"/registry/secrets/emptydir-50390\\\" \" with result \"range_response_count:0 size:6\" took too long (101.618533ms) to execute\n2021-05-24 22:10:32.886815 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/emptydir-5039/default\\\" \" with result \"range_response_count:1 size:224\" took too long (101.85745ms) to execute\n2021-05-24 22:10:37.329127 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:47.329146 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:10:57.328885 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:07.328746 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:08.288245 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-5521/update-demo-nautilus-dwwbd\\\" \" with result \"range_response_count:1 size:1808\" took too long (104.833803ms) to execute\n2021-05-24 22:11:09.078126 W | etcdserver: read-only range request \"key:\\\"/registry/statefulsets/configmap-1561/\\\" range_end:\\\"/registry/statefulsets/configmap-15610\\\" \" with result \"range_response_count:0 size:6\" took too long (296.483154ms) to execute\n2021-05-24 22:11:09.078286 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (102.355159ms) to 
execute\n2021-05-24 22:11:09.078477 W | etcdserver: read-only range request \"key:\\\"/registry/events/projected-2412/pod-projected-configmaps-d5205c63-4eea-4c63-af11-40cd2a016bec.1682203cfbc7fc8d\\\" \" with result \"range_response_count:1 size:980\" took too long (295.532519ms) to execute\n2021-05-24 22:11:09.078607 W | etcdserver: read-only range request \"key:\\\"/registry/events/statefulset-6739/ss-1.1682203d6482a158\\\" \" with result \"range_response_count:1 size:787\" took too long (215.708941ms) to execute\n2021-05-24 22:11:09.079337 W | etcdserver: read-only range request \"key:\\\"/registry/events/\\\" range_end:\\\"/registry/events0\\\" count_only:true \" with result \"range_response_count:0 size:9\" took too long (156.117282ms) to execute\n2021-05-24 22:11:17.328955 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:27.328805 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:37.329079 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:39.677177 W | etcdserver: read-only range request \"key:\\\"/registry/pods/pods-7405/pod-submit-remove-70532608-d4e9-4bb2-9661-cf607f2e74c4\\\" \" with result \"range_response_count:1 size:3210\" took too long (141.316163ms) to execute\n2021-05-24 22:11:39.677347 W | etcdserver: read-only range request \"key:\\\"/registry/pods/job-7996/foo-9zljg\\\" \" with result \"range_response_count:1 size:2760\" took too long (159.986709ms) to execute\n2021-05-24 22:11:39.677523 W | etcdserver: read-only range request \"key:\\\"/registry/pods/pod-network-test-5746/netserver-0\\\" \" with result \"range_response_count:1 size:4025\" took too long (189.374462ms) to execute\n2021-05-24 22:11:39.677541 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (117.981365ms) to execute\n2021-05-24 22:11:39.677632 W | etcdserver: read-only range request 
\"key:\\\"/registry/pods/replicaset-2637/pod-adoption-release\\\" \" with result \"range_response_count:1 size:2906\" took too long (145.17588ms) to execute\n2021-05-24 22:11:39.677741 W | etcdserver: read-only range request \"key:\\\"/registry/pods/job-7996/foo-6d5vq\\\" \" with result \"range_response_count:1 size:2761\" took too long (163.944561ms) to execute\n2021-05-24 22:11:39.677854 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-9912/test-quota\\\" \" with result \"range_response_count:1 size:3247\" took too long (111.812628ms) to execute\n2021-05-24 22:11:39.677954 W | etcdserver: read-only range request \"key:\\\"/registry/pods/pods-7405/pod-submit-remove-70532608-d4e9-4bb2-9661-cf607f2e74c4\\\" \" with result \"range_response_count:1 size:3210\" took too long (164.971066ms) to execute\n2021-05-24 22:11:39.779831 W | etcdserver: request \"header: txn: success: > failure: >>\" with result \"size:20\" took too long (100.883254ms) to execute\n2021-05-24 22:11:40.079174 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/resourcequota-9912\\\" \" with result \"range_response_count:1 size:489\" took too long (293.33748ms) to execute\n2021-05-24 22:11:40.079317 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (103.031932ms) to execute\n2021-05-24 22:11:40.079669 W | etcdserver: read-only range request \"key:\\\"/registry/pods/job-7996/foo-6d5vq\\\" \" with result \"range_response_count:1 size:2761\" took too long (185.638555ms) to execute\n2021-05-24 22:11:40.079775 W | etcdserver: read-only range request \"key:\\\"/registry/events/kubelet-test-3785/busybox-host-aliasesb5577340-d794-4d7d-a4ff-461869263a9b.16822043d1e08d39\\\" \" with result \"range_response_count:1 size:1003\" took too long (202.280346ms) to execute\n2021-05-24 22:11:40.079889 W | etcdserver: read-only range request \"key:\\\"/registry/pods/job-7996/foo-9zljg\\\" \" with result 
\"range_response_count:1 size:2760\" took too long (185.792737ms) to execute\n2021-05-24 22:11:40.277755 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/container-runtime-8263/\\\" range_end:\\\"/registry/resourcequotas/container-runtime-82630\\\" \" with result \"range_response_count:0 size:6\" took too long (142.830512ms) to execute\n2021-05-24 22:11:40.277849 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/pod-network-test-3621/\\\" range_end:\\\"/registry/ingress/pod-network-test-36210\\\" \" with result \"range_response_count:0 size:6\" took too long (184.907031ms) to execute\n2021-05-24 22:11:40.477683 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/pod-network-test-3621/\\\" range_end:\\\"/registry/ingress/pod-network-test-36210\\\" \" with result \"range_response_count:0 size:6\" took too long (195.749945ms) to execute\n2021-05-24 22:11:40.878378 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/pod-network-test-3621/\\\" range_end:\\\"/registry/services/endpoints/pod-network-test-36210\\\" \" with result \"range_response_count:0 size:6\" took too long (195.907017ms) to execute\n2021-05-24 22:11:40.878592 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/kubelet-test-3785/\\\" range_end:\\\"/registry/resourcequotas/kubelet-test-37850\\\" \" with result \"range_response_count:0 size:6\" took too long (195.981373ms) to execute\n2021-05-24 22:11:40.878750 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8263/termination-message-containerf8c1cc24-3689-4fa3-b005-f19613d7bf88\\\" \" with result \"range_response_count:1 size:1518\" took too long (193.927711ms) to execute\n2021-05-24 22:11:40.878882 W | etcdserver: read-only range request \"key:\\\"/registry/daemonsets/kubectl-4625/\\\" range_end:\\\"/registry/daemonsets/kubectl-46250\\\" \" with result \"range_response_count:0 size:6\" took too long (195.9053ms) 
to execute\n2021-05-24 22:11:47.328136 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:11:57.328110 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:07.328418 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:17.328643 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:27.328826 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:34.677117 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (100.869578ms) to execute\n2021-05-24 22:12:34.677359 W | etcdserver: read-only range request \"key:\\\"/registry/events/replication-controller-6652/condition-test.16822050b141eae8\\\" \" with result \"range_response_count:1 size:809\" took too long (192.289223ms) to execute\n2021-05-24 22:12:34.677421 W | etcdserver: read-only range request \"key:\\\"/registry/pods/replication-controller-6652/condition-test-cq95t\\\" \" with result \"range_response_count:0 size:6\" took too long (144.023462ms) to execute\n2021-05-24 22:12:34.677459 W | etcdserver: read-only range request \"key:\\\"/registry/pods/replication-controller-6652/condition-test-m266x\\\" \" with result \"range_response_count:0 size:6\" took too long (148.652391ms) to execute\n2021-05-24 22:12:34.677520 W | etcdserver: read-only range request \"key:\\\"/registry/pods/replication-controller-6652/condition-test-m266x\\\" \" with result \"range_response_count:0 size:6\" took too long (155.570037ms) to execute\n2021-05-24 22:12:34.677562 W | etcdserver: read-only range request \"key:\\\"/registry/pods/replication-controller-6652/condition-test-cq95t\\\" \" with result \"range_response_count:0 size:6\" took too long (148.957952ms) to execute\n2021-05-24 22:12:34.677664 W | etcdserver: read-only range request \"key:\\\"/registry/pods/statefulset-1408/ss2-0\\\" \" with result \"range_response_count:1 size:3582\" took too long (145.511455ms) to 
execute\n2021-05-24 22:12:34.677728 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-5325/webserver-deployment-847dcfb7fb-ts5hg\\\" \" with result \"range_response_count:1 size:2321\" took too long (180.133895ms) to execute\n2021-05-24 22:12:37.330937 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:47.328281 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:12:57.328643 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:02.178680 W | etcdserver: read-only range request \"key:\\\"/registry/pods/gc-6338/simpletest.rc-gnbhd\\\" \" with result \"range_response_count:1 size:1626\" took too long (177.197127ms) to execute\n2021-05-24 22:13:02.276424 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:492\" took too long (150.815643ms) to execute\n2021-05-24 22:13:02.276496 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/deployment-5325\\\" \" with result \"range_response_count:0 size:6\" took too long (126.758169ms) to execute\n2021-05-24 22:13:02.276572 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:519\" took too long (151.240359ms) to execute\n2021-05-24 22:13:02.276614 W | etcdserver: read-only range request \"key:\\\"/registry/pods/gc-6338/simpletest.rc-b8mw6\\\" \" with result \"range_response_count:1 size:2194\" took too long (151.207059ms) to execute\n2021-05-24 22:13:02.276734 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kube-system/kube-controller-manager-v1.21-control-plane\\\" \" with result \"range_response_count:1 size:6752\" took too long (167.622553ms) to execute\n2021-05-24 22:13:02.383506 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (101.610685ms) to 
execute\n2021-05-24 22:13:07.329429 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:14.539632 I | etcdserver: start to snapshot (applied: 190019, lastsnap: 180018)\n2021-05-24 22:13:14.543469 I | etcdserver: saved snapshot at index 190019\n2021-05-24 22:13:14.544255 I | etcdserver: compacted raft log at 185019\n2021-05-24 22:13:17.328881 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:19.615138 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-00000000000222ee.snap successfully\n2021-05-24 22:13:27.328401 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:37.329046 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:47.328817 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:13:50.181610 I | mvcc: store.index: compact 174232\n2021-05-24 22:13:50.325365 I | mvcc: finished scheduled compaction at 174232 (took 137.018657ms)\n2021-05-24 22:13:57.328820 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:07.328710 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:17.329169 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:27.328397 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:32.678036 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (201.980285ms) to execute\n2021-05-24 22:14:32.678280 W | etcdserver: read-only range request \"key:\\\"/registry/pods/statefulset-9073/test-pod\\\" \" with result \"range_response_count:1 size:1920\" took too long (192.38372ms) to execute\n2021-05-24 22:14:33.176255 W | etcdserver: read-only range request \"key:\\\"/registry/ranges/serviceips\\\" \" with result \"range_response_count:1 size:7939\" took too long (194.285646ms) to execute\n2021-05-24 22:14:33.176721 W | etcdserver: read-only range request 
\"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:645\" took too long (161.026344ms) to execute\n2021-05-24 22:14:33.176809 W | etcdserver: read-only range request \"key:\\\"/registry/pods/services-1321/affinity-nodeport-vtspw\\\" \" with result \"range_response_count:1 size:3285\" took too long (186.263034ms) to execute\n2021-05-24 22:14:33.176849 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4624/agnhost-primary-qccjs\\\" \" with result \"range_response_count:1 size:3383\" took too long (139.762688ms) to execute\n2021-05-24 22:14:33.176939 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (161.350671ms) to execute\n2021-05-24 22:14:33.177048 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/kubectl-3444/\\\" range_end:\\\"/registry/resourcequotas/kubectl-34440\\\" \" with result \"range_response_count:0 size:6\" took too long (144.271531ms) to execute\n2021-05-24 22:14:33.477611 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/kubectl-3444/default\\\" \" with result \"range_response_count:1 size:186\" took too long (189.147288ms) to execute\n2021-05-24 22:14:33.776389 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (101.930515ms) to execute\n2021-05-24 22:14:37.328321 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:47.329016 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:14:57.328631 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:07.328196 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:17.328310 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:27.328127 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:30.479021 W | etcdserver: read-only range request \"key:\\\"/registry/projectcontour.io/tlscertificatedelegations/crd-publish-openapi-7906/\\\" range_end:\\\"/registry/projectcontour.io/tlscertificatedelegations/crd-publish-openapi-79060\\\" \" with result \"range_response_count:0 size:6\" took too long (101.560156ms) to execute\n2021-05-24 22:15:30.678440 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/crd-publish-openapi-7906\\\" \" with result \"range_response_count:1 size:1934\" took too long (174.962889ms) to execute\n2021-05-24 22:15:37.328933 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:47.329137 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:15:54.776975 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (102.737757ms) to execute\n2021-05-24 22:15:54.777039 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/secrets-6337/\\\" range_end:\\\"/registry/ingress/secrets-63370\\\" \" with result \"range_response_count:0 size:6\" took too long (191.148629ms) to execute\n2021-05-24 22:15:54.777095 W | etcdserver: read-only range request \"key:\\\"/registry/pods/services-1520/affinity-clusterip-transition-t6hzg\\\" \" with result \"range_response_count:1 size:3402\" took too long (177.549298ms) to execute\n2021-05-24 22:15:57.329377 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:16:07.329076 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:16:17.329223 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:16:27.328973 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:16:37.328944 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:16:47.328817 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 22:16:57.328666 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:07.329104 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:17.328086 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:22.279553 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (102.783259ms) to execute\n2021-05-24 22:17:22.279873 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/cronjob-4912/suspended\\\" \" with result \"range_response_count:1 size:1288\" took too long (205.303818ms) to execute\n2021-05-24 22:17:27.329110 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:37.328484 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:47.375860 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:17:57.328381 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:18:07.328262 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:18:12.281994 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/\\\" range_end:\\\"/registry/ingress0\\\" count_only:true \" with result \"range_response_count:0 size:6\" took too long (107.616783ms) to execute\n2021-05-24 22:18:17.328191 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:18:27.328920 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:18:37.329169 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:18:47.176479 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (258.798658ms) to execute\n2021-05-24 22:18:47.176601 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:645\" took too 
long (287.385674ms) to execute
2021-05-24 22:18:47.328036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:18:50.186572 I | mvcc: store.index: compact 184741
2021-05-24 22:18:50.363689 I | mvcc: finished scheduled compaction at 184741 (took 169.268304ms)
2021-05-24 22:18:57.329052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:07.328035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:17.328458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:27.329195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:37.328430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:47.328988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:19:57.328125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:07.328862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:17.329248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:27.329220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:37.328358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:47.328998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:20:57.328910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:06.478600 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (126.311536ms) to execute
2021-05-24 22:21:07.328846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:17.328481 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:27.328911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:36.976477 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:342" took too long (188.997482ms) to execute
2021-05-24 22:21:36.976677 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (301.503098ms) to execute
2021-05-24 22:21:37.328507 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:37.676290 W | etcdserver: read-only range request "key:\"/registry/pods/daemonsets-7251/daemon-set-dsvcs\" " with result "range_response_count:1 size:4206" took too long (378.03661ms) to execute
2021-05-24 22:21:47.329254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:21:57.329057 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:07.328668 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:17.328786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:27.328189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:31.577249 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (205.472376ms) to execute
2021-05-24 22:22:37.328842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:41.481045 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:492" took too long (130.862252ms) to execute
2021-05-24 22:22:41.481159 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (109.177739ms) to execute
2021-05-24 22:22:41.481376 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (100.998797ms) to execute
2021-05-24 22:22:47.328382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:22:57.328744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:05.876740 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (188.829447ms) to execute
2021-05-24 22:23:05.876826 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (190.890384ms) to execute
2021-05-24 22:23:07.329207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:17.328825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:27.329103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:30.756657 I | etcdserver: start to snapshot (applied: 200020, lastsnap: 190019)
2021-05-24 22:23:30.758752 I | etcdserver: saved snapshot at index 200020
2021-05-24 22:23:30.759332 I | etcdserver: compacted raft log at 195020
2021-05-24 22:23:37.328886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:47.328468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:49.622088 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-00000000000249ff.snap successfully
2021-05-24 22:23:50.190767 I | mvcc: store.index: compact 191177
2021-05-24 22:23:50.300841 I | mvcc: finished scheduled compaction at 191177 (took 105.441267ms)
2021-05-24 22:23:57.328640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:23:57.877130 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-2kmrg\" " with result "range_response_count:1 size:19303" took too long (190.971466ms) to execute
2021-05-24 22:23:57.877221 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:9" took too long (129.310342ms) to execute
2021-05-24 22:23:58.176286 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:7939" took too long (295.007704ms) to execute
2021-05-24 22:23:58.176513 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.532783ms) to execute
2021-05-24 22:23:58.176777 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-ptlxj.168220f0b90daadd\" " with result "range_response_count:1 size:1032" took too long (103.120523ms) to execute
2021-05-24 22:23:58.376611 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-mzjt7.168220ef47699e15\" " with result "range_response_count:1 size:1033" took too long (103.723357ms) to execute
2021-05-24 22:23:58.376789 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:492" took too long (123.831079ms) to execute
2021-05-24 22:23:58.578967 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (101.372603ms) to execute
2021-05-24 22:23:58.579176 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-2kmrg.168220f0e8b5cbaf\" " with result "range_response_count:1 size:1033" took too long (107.141931ms) to execute
2021-05-24 22:23:58.579221 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-mzjt7\" " with result "range_response_count:1 size:19303" took too long (158.195972ms) to execute
2021-05-24 22:23:58.784112 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-519e0f2e-c90d-4c62-84e4-2af78b360cfa-ptlxj\" " with result "range_response_count:1 size:19303" took too long (103.059964ms) to execute
2021-05-24 22:24:07.328549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:24:17.328160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:24:22.676194 W | etcdserver: request "header: lease_revoke:" with result "size:29" took too long (199.817961ms) to execute
2021-05-24 22:24:22.676467 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (255.218685ms) to execute
2021-05-24 22:24:22.676619 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (348.557538ms) to execute
2021-05-24 22:24:22.676658 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (255.27415ms) to execute
2021-05-24 22:24:22.676763 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-fbv29.168220f50cdb7a25\" " with result "range_response_count:1 size:1034" took too long (217.904195ms) to execute
2021-05-24 22:24:23.376668 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (500.776387ms) to execute
2021-05-24 22:24:23.377295 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (680.265742ms) to execute
2021-05-24 22:24:23.377406 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-xgj6n\" " with result "range_response_count:1 size:19679" took too long (610.515748ms) to execute
2021-05-24 22:24:23.377524 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx.168220f60738dc7c\" " with result "range_response_count:1 size:1034" took too long (693.816382ms) to execute
2021-05-24 22:24:23.377553 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-fbv29\" " with result "range_response_count:1 size:19679" took too long (611.928876ms) to execute
2021-05-24 22:24:23.377578 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:6" took too long (625.336648ms) to execute
2021-05-24 22:24:23.377671 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx\" " with result "range_response_count:1 size:19679" took too long (611.992643ms) to execute
2021-05-24 22:24:23.377720 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c\" " with result "range_response_count:1 size:19679" took too long (606.011834ms) to execute
2021-05-24 22:24:23.378085 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-zlldc\" " with result "range_response_count:1 size:19679" took too long (606.130821ms) to execute
2021-05-24 22:24:23.378219 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c\" " with result "range_response_count:1 size:19679" took too long (647.465894ms) to execute
2021-05-24 22:24:24.276663 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (600.781718ms) to execute
2021-05-24 22:24:24.276980 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (601.850379ms) to execute
2021-05-24 22:24:24.277082 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx\" " with result "range_response_count:1 size:19679" took too long (888.199318ms) to execute
2021-05-24 22:24:24.277222 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx.168220f60738dc7c\" " with result "range_response_count:1 size:1034" took too long (888.090504ms) to execute
2021-05-24 22:24:25.076008 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (400.367276ms) to execute
2021-05-24 22:24:25.076127 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx\" " with result "range_response_count:1 size:19679" took too long (788.777316ms) to execute
2021-05-24 22:24:25.076533 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:6" took too long (349.117749ms) to execute
2021-05-24 22:24:25.076638 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx.168220f60738dc7c\" " with result "range_response_count:1 size:1034" took too long (788.927893ms) to execute
2021-05-24 22:24:26.276324 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (878.366476ms) to execute
2021-05-24 22:24:26.276432 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (884.165156ms) to execute
2021-05-24 22:24:26.276467 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (602.532609ms) to execute
2021-05-24 22:24:26.276491 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-fbv29.168220f50cdb7a25\" " with result "range_response_count:1 size:1034" took too long (787.062241ms) to execute
2021-05-24 22:24:26.276539 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c\" " with result "range_response_count:1 size:19679" took too long (839.101045ms) to execute
2021-05-24 22:24:26.276576 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (878.874595ms) to execute
2021-05-24 22:24:26.276754 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (383.262444ms) to execute
2021-05-24 22:24:26.776976 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (400.524385ms) to execute
2021-05-24 22:24:26.777636 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c\" " with result "range_response_count:1 size:19679" took too long (491.715984ms) to execute
2021-05-24 22:24:26.777740 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (489.111846ms) to execute
2021-05-24 22:24:26.777852 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (103.360513ms) to execute
2021-05-24 22:24:26.777972 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx.168220f60738dc7c\" " with result "range_response_count:1 size:1034" took too long (489.020972ms) to execute
2021-05-24 22:24:27.576433 W | etcdserver: request "header: txn: success: > failure: >>" with result "size:19693" took too long (499.580551ms) to execute
2021-05-24 22:24:27.576712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:24:27.577150 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:342" took too long (779.88087ms) to execute
2021-05-24 22:24:27.577219 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true " with result "range_response_count:0 size:8" took too long (250.919272ms) to execute
2021-05-24 22:24:27.577346 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c.168220f56c12a776\" " with result "range_response_count:1 size:1033" took too long (791.27423ms) to execute
2021-05-24 22:24:28.176192 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (499.754611ms) to execute
2021-05-24 22:24:28.176678 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:666" took too long (594.519248ms) to execute
2021-05-24 22:24:28.176771 W | etcdserver: read-only range request "key:\"/registry/pods/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c\" " with result "range_response_count:0 size:6" took too long (588.920673ms) to execute
2021-05-24 22:24:28.176867 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (502.757779ms) to execute
2021-05-24 22:24:28.176908 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-0\" " with result "range_response_count:1 size:318" took too long (581.977544ms) to execute
2021-05-24 22:24:29.076274 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (279.862774ms) to execute
2021-05-24 22:24:29.076317 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-1\" " with result "range_response_count:1 size:318" took too long (891.302205ms) to execute
2021-05-24 22:24:29.076358 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-zlldc.168220f560282c02\" " with result "range_response_count:1 size:1034" took too long (891.446672ms) to execute
2021-05-24 22:24:29.076423 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (287.122402ms) to execute
2021-05-24 22:24:29.076464 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (401.37665ms) to execute
2021-05-24 22:24:29.076500 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.18.0.3\" " with result "range_response_count:1 size:133" took too long (897.871899ms) to execute
2021-05-24 22:24:29.076901 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (280.163069ms) to execute
2021-05-24 22:24:29.476425 W | etcdserver: request "header: lease_grant:" with result "size:41" took too long (200.335447ms) to execute
2021-05-24 22:24:29.477348 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-2\" " with result "range_response_count:1 size:318" took too long (394.236795ms) to execute
2021-05-24 22:24:29.976640 W | etcdserver: request "header: txn: success: > failure: >>" with result "size:20" took too long (299.971595ms) to execute
2021-05-24 22:24:29.976924 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:419" took too long (495.871534ms) to execute
2021-05-24 22:24:29.976992 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (390.702465ms) to execute
2021-05-24 22:24:29.977084 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-fbv29.168220f50cdb7a25\" " with result "range_response_count:1 size:1034" took too long (490.431066ms) to execute
2021-05-24 22:24:29.977128 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (302.704242ms) to execute
2021-05-24 22:24:29.977238 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22641" took too long (429.732457ms) to execute
2021-05-24 22:24:30.476419 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-6bm7c.168220f56c12a776\" " with result "range_response_count:1 size:1034" took too long (490.565601ms) to execute
2021-05-24 22:24:30.476480 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:478" took too long (493.393706ms) to execute
2021-05-24 22:24:30.476728 W | etcdserver: request "header: txn: success: > failure: >>" with result "size:20" took too long (200.385375ms) to execute
2021-05-24 22:24:31.076300 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (733.991038ms) to execute
2021-05-24 22:24:31.076473 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (400.315963ms) to execute
2021-05-24 22:24:31.076818 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-4\" " with result "range_response_count:1 size:318" took too long (596.198925ms) to execute
2021-05-24 22:24:31.076856 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (401.913144ms) to execute
2021-05-24 22:24:31.776109 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (101.545906ms) to execute
2021-05-24 22:24:31.776174 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true " with result "range_response_count:0 size:8" took too long (363.253162ms) to execute
2021-05-24 22:24:31.776259 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-5\" " with result "range_response_count:1 size:318" took too long (694.401342ms) to execute
2021-05-24 22:24:31.776383 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (403.712677ms) to execute
2021-05-24 22:24:31.776523 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-t5wrx.168220f60738dc7c\" " with result "range_response_count:1 size:1034" took too long (688.752487ms) to execute
2021-05-24 22:24:31.776605 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (289.684741ms) to execute
2021-05-24 22:24:31.776664 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (284.12593ms) to execute
2021-05-24 22:24:31.776752 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (289.758391ms) to execute
2021-05-24 22:24:32.176738 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (200.600331ms) to execute
2021-05-24 22:24:32.177519 W | etcdserver: read-only range request "key:\"/registry/configmaps/emptydir-wrapper-3957/racey-configmap-6\" " with result "range_response_count:1 size:318" took too long (394.920734ms) to execute
2021-05-24 22:24:32.481996 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (204.698089ms) to execute
2021-05-24 22:24:32.487341 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (405.541213ms) to execute
2021-05-24 22:24:32.487384 W | etcdserver: read-only range request "key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true " with result "range_response_count:0 size:8" took too long (290.504046ms) to execute
2021-05-24 22:24:32.487480 W | etcdserver: read-only range request "key:\"/registry/events/emptydir-wrapper-3957/wrapped-volume-race-383bc41a-0eed-47aa-bbf3-63176eaba0e3-zlldc.168220f560282c02\" " with result "range_response_count:1 size:1034" took too long (305.817595ms) to execute
2021-05-24 22:24:37.328548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:24:47.330218 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:24:48.776629 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (299.975699ms) to execute
2021-05-24 22:24:48.777016 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (201.23156ms) to execute
2021-05-24 22:24:48.777069 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (102.174024ms) to execute
2021-05-24 22:24:50.676420 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:6" took too long (236.298393ms) to execute
2021-05-24 22:24:57.328691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:07.328395 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:17.328944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:27.329073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:37.328463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:47.328525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:25:57.329124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:07.328363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:17.328288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:27.328234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:37.328841 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:43.777141 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (102.611546ms) to execute
2021-05-24 22:26:43.777186 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (176.894004ms) to execute
2021-05-24 22:26:47.328959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:26:57.328800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:07.328952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:17.329037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:27.328718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:37.328765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:47.328059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:27:57.330236 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:07.329287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:17.328590 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:27.329256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:37.328577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:47.328700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:28:50.195472 I | mvcc: store.index: compact 192449
2021-05-24 22:28:50.212562 I | mvcc: finished scheduled compaction at 192449 (took 15.805667ms)
2021-05-24 22:28:57.328615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:07.328515 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:17.329122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:27.329123 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:37.328868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:39.076827 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.751889ms) to execute
2021-05-24 22:29:39.077180 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (117.499848ms) to execute
2021-05-24 22:29:39.376725 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.650031ms) to execute
2021-05-24 22:29:47.328788 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:29:57.329111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:07.328622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:17.329317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:26.976253 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.074721ms) to execute
2021-05-24 22:30:27.328447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:37.328711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:47.328702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:30:57.329007 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:07.329098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:17.328768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:27.329333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:37.328905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:47.329140 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:31:56.976936 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.348442ms) to execute
2021-05-24 22:31:57.328286 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:07.377492 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22528" took too long (196.597536ms) to execute
2021-05-24 22:32:07.377637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:07.576578 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22528" took too long (189.014396ms) to execute
2021-05-24 22:32:07.576726 W | etcdserver: read-only range request "key:\"/registry/cronjobs/sched-preemption-path-8460/\" range_end:\"/registry/cronjobs/sched-preemption-path-84600\" " with result "range_response_count:0 size:6" took too long (185.340052ms) to execute
2021-05-24 22:32:07.576934 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (171.643924ms) to execute
2021-05-24 22:32:17.328803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:27.329510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:37.328934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:47.329351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:32:57.329176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:07.329065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:17.329200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:27.329245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:37.328850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:47.328300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:33:50.376169 I | mvcc: store.index: compact 193995
2021-05-24 22:33:50.676228 W | etcdserver: read-only range request "key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true " with result "range_response_count:0 size:6" took too long (193.4578ms) to execute
2021-05-24 22:33:50.702288 I | mvcc: finished scheduled compaction at 193995 (took 324.60012ms)
2021-05-24 22:33:57.328805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:05.976395 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:8" took too long (193.000666ms) to execute
2021-05-24 22:34:07.328397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:17.328707 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:27.328131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:37.328950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:47.329386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:34:57.328778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:35:07.328958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:35:17.329281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:35:19.176129 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (305.098606ms) to execute
2021-05-24 22:35:19.176229 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (809.351924ms) to execute
2021-05-24 22:35:19.176252 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (809.20869ms) to execute
2021-05-24 22:35:19.176318 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (500.481204ms) to execute
2021-05-24 22:35:19.176610 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (817.765226ms) to execute
2021-05-24 22:35:19.876372 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (500.115083ms) to execute
2021-05-24 22:35:19.876876 W | etcdserver: read-only range request "key:\"/registry/pods/sched-pred-5402/pod5\" " with result "range_response_count:1 size:2260" took too long (231.763031ms) to execute
2021-05-24 22:35:19.876920 W | etcdserver: read-only range request "key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true " with result "range_response_count:0 size:8" took too long (584.630736ms) to execute
2021-05-24 22:35:19.877010 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (202.175747ms) to execute
2021-05-24 22:35:20.776325 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (100.712839ms) to execute
2021-05-24 22:35:20.776407 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (440.242466ms) to execute
2021-05-24 22:35:20.776455 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22683" took too long (529.374012ms) to execute
2021-05-24 22:35:21.376425 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (180.269995ms) to execute
2021-05-24 22:35:22.176004 W | etcdserver: read-only range request "key:\"/registry/pods/sched-pred-5402/pod5\" " with result "range_response_count:1 size:2260" took too long (531.234599ms) to execute
2021-05-24 22:35:22.176053 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (289.762354ms) to execute
2021-05-24 22:35:22.176172 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (501.37181ms) to execute
2021-05-24 22:35:22.176203 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (289.639424ms) to execute
2021-05-24 22:35:22.176399 W | etcdserver: read-only range request "key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true " with result "range_response_count:0 size:8" took too long (202.277054ms) to execute
2021-05-24 22:35:22.576585 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (423.531345ms) to execute
2021-05-24 22:35:22.576711 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.655738ms) to execute
2021-05-24 22:35:22.577179 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:8" took too long (284.558739ms) to execute
2021-05-24 22:35:23.076223 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:6" took too long (184.394285ms) to execute
2021-05-24 22:35:23.076309 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (282.871383ms) to execute
2021-05-24 22:35:23.776022 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (101.996448ms) to execute
2021-05-24 22:35:23.776115 W | etcdserver: read-only
range request \"key:\\\"/registry/pods/sched-pred-5402/pod5\\\" \" with result \"range_response_count:1 size:2260\" took too long (131.008222ms) to execute\n2021-05-24 22:35:23.776172 W | etcdserver: read-only range request \"key:\\\"/registry/pods/\\\" range_end:\\\"/registry/pods0\\\" count_only:true \" with result \"range_response_count:0 size:8\" took too long (517.154083ms) to execute\n2021-05-24 22:35:23.776257 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (385.102877ms) to execute\n2021-05-24 22:35:27.328674 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:35:37.328799 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:35:47.329060 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:35:57.328086 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:07.330197 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:17.328130 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:27.328448 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:37.328127 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:47.328498 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:36:57.328591 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:07.328116 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:17.328650 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:27.329129 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:31.678165 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:493\" took too long (128.754381ms) to execute\n2021-05-24 22:37:33.777620 W | etcdserver: 
read-only range request \"key:\\\"/registry/pods/sched-pred-5402/pod5\\\" \" with result \"range_response_count:1 size:2260\" took too long (131.473169ms) to execute\n2021-05-24 22:37:33.777751 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (103.010654ms) to execute\n2021-05-24 22:37:37.328248 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:47.328841 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:37:57.328131 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:07.328497 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:15.876206 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-priorityclass-8974/quota-priorityclass\\\" \" with result \"range_response_count:1 size:676\" took too long (122.222239ms) to execute\n2021-05-24 22:38:15.876269 W | etcdserver: read-only range request \"key:\\\"/registry/pods/clientset-1735/pod9b2b8395-0449-4621-abfc-3a9c4c13e61d\\\" \" with result \"range_response_count:1 size:3276\" took too long (103.490758ms) to execute\n2021-05-24 22:38:15.977543 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/apply-3656\\\" \" with result \"range_response_count:1 size:1878\" took too long (195.587528ms) to execute\n2021-05-24 22:38:15.977615 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-4136/\\\" range_end:\\\"/registry/resourcequotas/resourcequota-41360\\\" \" with result \"range_response_count:0 size:6\" took too long (111.772733ms) to execute\n2021-05-24 22:38:15.977716 W | etcdserver: read-only range request \"key:\\\"/registry/jobs/gc-2803/\\\" range_end:\\\"/registry/jobs/gc-28030\\\" \" with result \"range_response_count:0 size:6\" took too long (115.710212ms) to execute\n2021-05-24 22:38:16.576430 W | etcdserver: read-only range request 
\"key:\\\"/registry/podtemplates/discovery-4108/\\\" range_end:\\\"/registry/podtemplates/discovery-41080\\\" \" with result \"range_response_count:0 size:6\" took too long (293.675667ms) to execute\n2021-05-24 22:38:16.576714 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (197.19136ms) to execute\n2021-05-24 22:38:16.577336 W | etcdserver: read-only range request \"key:\\\"/registry/jobs/gc-2803/\\\" range_end:\\\"/registry/jobs/gc-28030\\\" \" with result \"range_response_count:0 size:6\" took too long (215.937094ms) to execute\n2021-05-24 22:38:16.577546 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/apply-4650/\\\" range_end:\\\"/registry/resourcequotas/apply-46500\\\" \" with result \"range_response_count:0 size:6\" took too long (231.494588ms) to execute\n2021-05-24 22:38:16.676100 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-8891/quota-for-e2e-test-resourcequota-2099-crds\\\" \" with result \"range_response_count:1 size:740\" took too long (163.162778ms) to execute\n2021-05-24 22:38:16.676299 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-1908/test-quota\\\" \" with result \"range_response_count:1 size:3247\" took too long (132.697713ms) to execute\n2021-05-24 22:38:16.778336 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (101.729928ms) to execute\n2021-05-24 22:38:16.778669 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/discovery-4108/\\\" range_end:\\\"/registry/services/endpoints/discovery-41080\\\" \" with result \"range_response_count:0 size:6\" took too long (197.843463ms) to execute\n2021-05-24 22:38:16.778753 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:6\" took too long (104.453855ms) to execute\n2021-05-24 22:38:16.778815 W | 
etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/resourcequota-priorityclass-7590/default\\\" \" with result \"range_response_count:1 size:227\" took too long (198.476299ms) to execute\n2021-05-24 22:38:17.328464 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:26.877663 W | etcdserver: read-only range request \"key:\\\"/registry/pods/gc-7723/simpletest.rc-df2cj\\\" \" with result \"range_response_count:1 size:1626\" took too long (181.890475ms) to execute\n2021-05-24 22:38:26.877754 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (101.325185ms) to execute\n2021-05-24 22:38:26.976231 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/default\\\" \" with result \"range_response_count:1 size:342\" took too long (129.61784ms) to execute\n2021-05-24 22:38:26.976327 W | etcdserver: read-only range request \"key:\\\"/registry/jobs/gc-2803/\\\" range_end:\\\"/registry/jobs/gc-28030\\\" \" with result \"range_response_count:0 size:6\" took too long (115.099447ms) to execute\n2021-05-24 22:38:26.976688 W | etcdserver: read-only range request \"key:\\\"/registry/pods/gc-7723/simpletest.rc-df2cj\\\" \" with result \"range_response_count:1 size:1626\" took too long (101.772673ms) to execute\n2021-05-24 22:38:26.976781 W | etcdserver: read-only range request \"key:\\\"/registry/clusterrolebindings/\\\" range_end:\\\"/registry/clusterrolebindings0\\\" count_only:true \" with result \"range_response_count:0 size:8\" took too long (121.741084ms) to execute\n2021-05-24 22:38:27.282694 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (105.829611ms) to execute\n2021-05-24 22:38:27.285893 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/resourcequota-4136\\\" \" with result \"range_response_count:1 size:504\" took too long (166.235245ms) to execute\n2021-05-24 22:38:27.376010 W | etcdserver: 
read-only range request \"key:\\\"/registry/pods/clientset-1735/pod9b2b8395-0449-4621-abfc-3a9c4c13e61d\\\" \" with result \"range_response_count:0 size:6\" took too long (101.035571ms) to execute\n2021-05-24 22:38:27.376050 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (140.054916ms) to execute\n2021-05-24 22:38:27.376115 W | etcdserver: read-only range request \"key:\\\"/registry/mygroup.example.com/foo7nqrxas/canary9ppz6\\\" \" with result \"range_response_count:1 size:389\" took too long (116.907933ms) to execute\n2021-05-24 22:38:27.376237 W | etcdserver: read-only range request \"key:\\\"/registry/mygroup.example.com/foo7nqrxas/ownernt4t6\\\" \" with result \"range_response_count:0 size:6\" took too long (117.222224ms) to execute\n2021-05-24 22:38:27.376517 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (179.751056ms) to execute\n2021-05-24 22:38:27.376611 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:22528\" took too long (175.183708ms) to execute\n2021-05-24 22:38:27.376856 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:37.328953 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:47.328935 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:38:50.381128 I | mvcc: store.index: compact 195427\n2021-05-24 22:38:50.402318 I | mvcc: finished scheduled compaction at 195427 (took 19.300427ms)\n2021-05-24 22:38:57.328651 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:06.880291 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/gc-9776/\\\" 
range_end:\\\"/registry/services/endpoints/gc-97760\\\" \" with result \"range_response_count:0 size:6\" took too long (100.225302ms) to execute\n2021-05-24 22:39:07.328924 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:17.328898 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:27.328948 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:37.328689 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:47.328555 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:39:57.329031 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:07.328525 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:17.328437 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:27.328471 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:36.876848 W | etcdserver: read-only range request \"key:\\\"/registry/apiregistration.k8s.io/apiservices/\\\" range_end:\\\"/registry/apiregistration.k8s.io/apiservices0\\\" count_only:true \" with result \"range_response_count:0 size:8\" took too long (172.875332ms) to execute\n2021-05-24 22:40:37.328888 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:47.328811 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:40:57.328572 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:07.329032 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:17.328947 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:27.328451 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:37.331108 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:47.328493 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:41:57.329074 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-24 22:42:07.329371 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:42:17.328706 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:42:27.329123 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:42:37.329035 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:42:47.329075 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:42:57.329243 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:07.328949 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:17.328546 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:23.876846 W | etcdserver: request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (193.114955ms) to execute\n2021-05-24 22:43:25.376076 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:493\" took too long (294.329078ms) to execute\n2021-05-24 22:43:27.328447 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:37.328356 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:47.328765 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:43:50.385785 I | mvcc: store.index: compact 198341\n2021-05-24 22:43:50.434097 I | mvcc: finished scheduled compaction at 198341 (took 45.878284ms)\n2021-05-24 22:43:57.328795 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:07.328316 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:17.328371 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:27.328677 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:37.328175 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:43.976064 W | etcdserver: read-only range request 
\"key:\\\"/registry/namespaces/nslifetest-40-8408\\\" \" with result \"range_response_count:1 size:436\" took too long (134.135967ms) to execute\n2021-05-24 22:44:43.976101 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (166.985514ms) to execute\n2021-05-24 22:44:44.376596 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-77-4101\\\" \" with result \"range_response_count:1 size:436\" took too long (285.952595ms) to execute\n2021-05-24 22:44:44.376677 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:645\" took too long (208.855432ms) to execute\n2021-05-24 22:44:44.376735 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/\\\" range_end:\\\"/registry/poddisruptionbudgets0\\\" count_only:true \" with result \"range_response_count:0 size:6\" took too long (285.269668ms) to execute\n2021-05-24 22:44:44.376798 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/projectcontour/leader-elect\\\" \" with result \"range_response_count:1 size:544\" took too long (103.576731ms) to execute\n2021-05-24 22:44:44.376929 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-75-6135\\\" \" with result \"range_response_count:1 size:436\" took too long (137.614256ms) to execute\n2021-05-24 22:44:44.376980 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-80-3823\\\" \" with result \"range_response_count:1 size:436\" took too long (187.578194ms) to execute\n2021-05-24 22:44:44.377106 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-35-9791\\\" \" with result \"range_response_count:1 size:436\" took too long (237.447221ms) to execute\n2021-05-24 22:44:44.676382 W | etcdserver: 
request \"header: txn: success:> failure: >>\" with result \"size:18\" took too long (199.661443ms) to execute\n2021-05-24 22:44:44.677150 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-90-7763\\\" \" with result \"range_response_count:1 size:436\" took too long (288.273892ms) to execute\n2021-05-24 22:44:44.677207 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-59-9016\\\" \" with result \"range_response_count:1 size:436\" took too long (189.083371ms) to execute\n2021-05-24 22:44:44.677271 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-96-4523\\\" \" with result \"range_response_count:1 size:436\" took too long (239.379457ms) to execute\n2021-05-24 22:44:44.677387 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/nslifetest-6-2095\\\" \" with result \"range_response_count:1 size:433\" took too long (139.434467ms) to execute\n2021-05-24 22:44:44.677495 W | etcdserver: read-only range request \"key:\\\"/registry/rolebindings/\\\" range_end:\\\"/registry/rolebindings0\\\" count_only:true \" with result \"range_response_count:0 size:8\" took too long (264.854106ms) to execute\n2021-05-24 22:44:47.328990 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:44:51.478408 W | etcdserver: request \"header: txn: success: > failure: >>\" with result \"size:20\" took too long (100.742215ms) to execute\n2021-05-24 22:44:51.478626 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/nslifetest-41-6395/kube-root-ca.crt\\\" \" with result \"range_response_count:1 size:1386\" took too long (100.974608ms) to execute\n2021-05-24 22:44:51.478754 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/nslifetest-2-4748/default\\\" \" with result \"range_response_count:1 size:196\" took too long (100.95968ms) to execute\n2021-05-24 22:44:51.479035 W | etcdserver: read-only range request 
\"key:\\\"/registry/configmaps/nslifetest-51-9425/kube-root-ca.crt\\\" \" with result \"range_response_count:1 size:1386\" took too long (100.937435ms) to execute\n2021-05-24 22:44:52.176224 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-41-6395/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-41-63950\\\" \" with result \"range_response_count:0 size:6\" took too long (191.573038ms) to execute\n2021-05-24 22:44:52.176276 W | etcdserver: read-only range request \"key:\\\"/registry/endpointslices/nslifetest-51-9425/\\\" range_end:\\\"/registry/endpointslices/nslifetest-51-94250\\\" \" with result \"range_response_count:0 size:6\" took too long (192.526479ms) to execute\n2021-05-24 22:44:52.176337 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/nslifetest-5-9746/\\\" range_end:\\\"/registry/ingress/nslifetest-5-97460\\\" \" with result \"range_response_count:0 size:6\" took too long (193.882453ms) to execute\n2021-05-24 22:44:52.176410 W | etcdserver: read-only range request \"key:\\\"/registry/rolebindings/nslifetest-40-8408/\\\" range_end:\\\"/registry/rolebindings/nslifetest-40-84080\\\" \" with result \"range_response_count:0 size:6\" took too long (194.078752ms) to execute\n2021-05-24 22:44:52.176480 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/nslifetest-15-5807/\\\" range_end:\\\"/registry/persistentvolumeclaims/nslifetest-15-58070\\\" \" with result \"range_response_count:0 size:6\" took too long (193.649649ms) to execute\n2021-05-24 22:44:52.176521 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/nslifetest-55-761/\\\" range_end:\\\"/registry/persistentvolumeclaims/nslifetest-55-7610\\\" \" with result \"range_response_count:0 size:6\" took too long (193.294393ms) to execute\n2021-05-24 22:44:52.176554 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/nslifetest-53-1826/\\\" 
range_end:\\\"/registry/poddisruptionbudgets/nslifetest-53-18260\\\" \" with result \"range_response_count:0 size:6\" took too long (193.531627ms) to execute\n2021-05-24 22:44:52.176618 W | etcdserver: read-only range request \"key:\\\"/registry/replicasets/nslifetest-2-4748/\\\" range_end:\\\"/registry/replicasets/nslifetest-2-47480\\\" \" with result \"range_response_count:0 size:6\" took too long (192.332008ms) to execute\n2021-05-24 22:44:52.176706 W | etcdserver: read-only range request \"key:\\\"/registry/projectcontour.io/httpproxies/nslifetest-17-3948/\\\" range_end:\\\"/registry/projectcontour.io/httpproxies/nslifetest-17-39480\\\" \" with result \"range_response_count:0 size:6\" took too long (192.059957ms) to execute\n2021-05-24 22:44:52.176772 W | etcdserver: read-only range request \"key:\\\"/registry/deployments/nslifetest-79-3380/\\\" range_end:\\\"/registry/deployments/nslifetest-79-33800\\\" \" with result \"range_response_count:0 size:6\" took too long (193.902328ms) to execute\n2021-05-24 22:44:52.177023 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:492\" took too long (164.986415ms) to execute\n2021-05-24 22:44:57.328982 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-24 22:45:02.876683 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-19-2488/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-19-24880\\\" \" with result \"range_response_count:0 size:6\" took too long (196.641099ms) to execute\n2021-05-24 22:45:02.877198 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-61-2402/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-61-24020\\\" \" with result \"range_response_count:0 size:6\" took too long (195.525792ms) to execute\n2021-05-24 22:45:02.877243 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-46-6329/\\\" 
range_end:\\\"/registry/resourcequotas/nslifetest-46-63290\\\" \" with result \"range_response_count:0 size:6\" took too long (195.193932ms) to execute\n2021-05-24 22:45:02.877279 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-40-8304/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-40-83040\\\" \" with result \"range_response_count:0 size:6\" took too long (195.478605ms) to execute\n2021-05-24 22:45:02.877426 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-41-6032/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-41-60320\\\" \" with result \"range_response_count:0 size:6\" took too long (195.192316ms) to execute\n2021-05-24 22:45:03.078386 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (101.621265ms) to execute\n2021-05-24 22:45:03.079365 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-33-436/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-33-4360\\\" \" with result \"range_response_count:0 size:6\" took too long (199.391651ms) to execute\n2021-05-24 22:45:03.079485 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-50-8406/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-50-84060\\\" \" with result \"range_response_count:0 size:6\" took too long (199.554888ms) to execute\n2021-05-24 22:45:03.079613 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-23-5408/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-23-54080\\\" \" with result \"range_response_count:0 size:6\" took too long (199.612407ms) to execute\n2021-05-24 22:45:03.079721 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-29-1383/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-29-13830\\\" \" with result \"range_response_count:0 size:6\" took too long (199.95513ms) to execute\n2021-05-24 
22:45:03.079851 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-39-4074/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-39-40740\\\" \" with result \"range_response_count:0 size:6\" took too long (100.781088ms) to execute\n2021-05-24 22:45:03.978848 W | etcdserver: request \"header: txn: success:> failure:<>>\" with result \"size:18\" took too long (199.89004ms) to execute\n2021-05-24 22:45:03.979247 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-76-1929/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-76-19290\\\" \" with result \"range_response_count:0 size:6\" took too long (296.823613ms) to execute\n2021-05-24 22:45:03.979571 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-87-8501/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-87-85010\\\" \" with result \"range_response_count:0 size:6\" took too long (247.078388ms) to execute\n2021-05-24 22:45:03.979655 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/nslifetest-28-6858/default\\\" \" with result \"range_response_count:1 size:198\" took too long (143.092735ms) to execute\n2021-05-24 22:45:03.979733 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-91-1721/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-91-17210\\\" \" with result \"range_response_count:0 size:6\" took too long (197.171526ms) to execute\n2021-05-24 22:45:03.979784 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/nslifetest-3-4710/default\\\" \" with result \"range_response_count:1 size:196\" took too long (242.915096ms) to execute\n2021-05-24 22:45:03.979884 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/nslifetest-70-1119/\\\" range_end:\\\"/registry/resourcequotas/nslifetest-70-11190\\\" \" with result \"range_response_count:0 size:6\" took too long (147.502563ms) to execute\n2021-05-24 
22:45:04.178591 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/nslifetest-49-7360/default\" " with result "range_response_count:1 size:198" took too long (192.885553ms) to execute
2021-05-24 22:45:04.178668 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:519" took too long (141.832968ms) to execute
2021-05-24 22:45:04.178938 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/nslifetest-77-803/\" range_end:\"/registry/resourcequotas/nslifetest-77-8030\" " with result "range_response_count:0 size:6" took too long (145.999239ms) to execute
2021-05-24 22:45:04.378056 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/nslifetest-86-7751/\" range_end:\"/registry/resourcequotas/nslifetest-86-77510\" " with result "range_response_count:0 size:6" took too long (194.494537ms) to execute
2021-05-24 22:45:04.378335 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/nslifetest-36-6207/default\" " with result "range_response_count:1 size:198" took too long (191.826836ms) to execute
2021-05-24 22:45:04.378457 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/nslifetest-88-8268/\" range_end:\"/registry/resourcequotas/nslifetest-88-82680\" " with result "range_response_count:0 size:6" took too long (146.192582ms) to execute
2021-05-24 22:45:04.378523 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/nslifetest-47-2532/default\" " with result "range_response_count:1 size:198" took too long (142.388056ms) to execute
2021-05-24 22:45:07.328456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:17.328757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:27.328661 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:32.606713 I | etcdserver: start to snapshot (applied: 210021, lastsnap: 200020)
2021-05-24 22:45:32.608915 I | etcdserver: saved snapshot at index 210021
2021-05-24 22:45:32.609373 I | etcdserver: compacted raft log at 205021
2021-05-24 22:45:37.328321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:47.328408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:49.643262 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000000027110.snap successfully
2021-05-24 22:45:57.329148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:45:57.600533 I | wal: segmented wal file /var/lib/etcd/member/wal/0000000000000006-0000000000033760.wal is created
2021-05-24 22:46:07.329188 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:46:16.377595 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-4205/default\" " with result "range_response_count:1 size:228" took too long (165.763791ms) to execute
2021-05-24 22:46:16.377729 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (158.56572ms) to execute
2021-05-24 22:46:16.377800 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-9tbmp\" " with result "range_response_count:1 size:1992" took too long (160.136295ms) to execute
2021-05-24 22:46:16.378102 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-4205/default\" " with result "range_response_count:1 size:228" took too long (134.074679ms) to execute
2021-05-24 22:46:16.776236 W | etcdserver: read-only range request "key:\"/registry/k8s.cni.cncf.io/network-attachment-definitions/\" range_end:\"/registry/k8s.cni.cncf.io/network-attachment-definitions0\" count_only:true " with result "range_response_count:0 size:6" took too long (474.012163ms) to execute
2021-05-24 22:46:16.776391 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-gwt2j\" " with result "range_response_count:1 size:1861" took too long (453.131623ms) to execute
2021-05-24 22:46:16.776453 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (200.257083ms) to execute
2021-05-24 22:46:16.780955 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-8pfr8\" " with result "range_response_count:1 size:2557" took too long (337.370104ms) to execute
2021-05-24 22:46:16.781144 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-vbrc7\" " with result "range_response_count:1 size:1861" took too long (401.742693ms) to execute
2021-05-24 22:46:16.781310 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-k854m\" " with result "range_response_count:1 size:1991" took too long (402.207418ms) to execute
2021-05-24 22:46:16.781457 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (106.208538ms) to execute
2021-05-24 22:46:16.781743 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:8" took too long (335.046414ms) to execute
2021-05-24 22:46:16.782606 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-4205/default\" " with result "range_response_count:1 size:228" took too long (370.927363ms) to execute
2021-05-24 22:46:16.782976 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-4205/default\" " with result "range_response_count:1 size:228" took too long (171.331148ms) to execute
2021-05-24 22:46:17.076440 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (199.699481ms) to execute
2021-05-24 22:46:17.077172 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (265.408923ms) to execute
2021-05-24 22:46:17.476532 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (199.719966ms) to execute
2021-05-24 22:46:17.477306 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-vbrc7\" " with result "range_response_count:1 size:1861" took too long (473.623406ms) to execute
2021-05-24 22:46:17.477354 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-44qcv\" " with result "range_response_count:0 size:6" took too long (402.861253ms) to execute
2021-05-24 22:46:17.477420 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:342" took too long (602.629556ms) to execute
2021-05-24 22:46:17.477462 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-gwt2j\" " with result "range_response_count:1 size:1861" took too long (496.990938ms) to execute
2021-05-24 22:46:17.477498 W | etcdserver: read-only range request "key:\"/registry/jobs/cronjob-15/\" range_end:\"/registry/jobs/cronjob-150\" " with result "range_response_count:1 size:1655" took too long (575.44197ms) to execute
2021-05-24 22:46:17.477562 W | etcdserver: read-only range request "key:\"/registry/jobs/ttlafterfinished-8392/rand-non-local\" " with result "range_response_count:1 size:1899" took too long (511.523325ms) to execute
2021-05-24 22:46:17.477596 W | etcdserver: read-only range request "key:\"/registry/pods/local-path-storage/create-pvc-2eda23e9-520a-4669-ab94-e9c45fa73061\" " with result "range_response_count:1 size:3248" took too long (445.947511ms) to execute
2021-05-24 22:46:17.477613 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-qwvcs\" " with result "range_response_count:1 size:1928" took too long (461.802137ms) to execute
2021-05-24 22:46:17.477679 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:493" took too long (612.048434ms) to execute
2021-05-24 22:46:17.477711 W | etcdserver: read-only range request "key:\"/registry/cronjobs/cronjob-8072/forbid\" " with result "range_response_count:1 size:1283" took too long (401.750081ms) to execute
2021-05-24 22:46:17.477745 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-8w7td\" " with result "range_response_count:1 size:1928" took too long (459.905077ms) to execute
2021-05-24 22:46:17.477811 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-4205/default\" " with result "range_response_count:1 size:228" took too long (665.271021ms) to execute
2021-05-24 22:46:17.477909 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-k854m\" " with result "range_response_count:1 size:1991" took too long (456.887659ms) to execute
2021-05-24 22:46:17.478055 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:520" took too long (611.678192ms) to execute
2021-05-24 22:46:17.478394 W | etcdserver: read-only range request "key:\"/registry/jobs/cronjob-3423/\" range_end:\"/registry/jobs/cronjob-34230\" " with result "range_response_count:1 size:1883" took too long (584.743276ms) to execute
2021-05-24 22:46:17.478514 W | etcdserver: read-only range request "key:\"/registry/jobs/job-4536/exceed-active-deadline\" " with result "range_response_count:1 size:1822" took too long (504.860523ms) to execute
2021-05-24 22:46:17.478646 W | etcdserver: read-only range request "key:\"/registry/jobs/job-7126/fail-once-non-local\" " with result "range_response_count:1 size:1883" took too long (417.461956ms) to execute
2021-05-24 22:46:17.478771 W | etcdserver: read-only range request "key:\"/registry/pods/disruption-5162/\" range_end:\"/registry/pods/disruption-51620\" " with result "range_response_count:10 size:30141" took too long (471.960526ms) to execute
2021-05-24 22:46:17.576214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-24 22:46:17.776748 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (200.126877ms) to execute
2021-05-24 22:46:17.777628 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-847dcfb7fb-skq6d\" " with result "range_response_count:0 size:6" took too long (566.462523ms) to execute
2021-05-24 22:46:17.777672 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-8hs58\" " with result "range_response_count:0 size:6" took too long (700.091658ms) to execute
2021-05-24 22:46:17.777811 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/statefulset-6933/datadir-ss-0\" " with result "range_response_count:1 size:1243" took too long (365.541562ms) to execute
2021-05-24 22:46:17.777844 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-9tbmp\" " with result "range_response_count:1 size:2557" took too long (697.405073ms) to execute
2021-05-24 22:46:17.777987 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-lhlds\" " with result "range_response_count:0 size:6" took too long (697.699112ms) to execute
2021-05-24 22:46:17.778077 W | etcdserver: read-only range request "key:\"/registry/deployments/deployment-4205/webserver\" " with result "range_response_count:1 size:2327" took too long (427.589127ms) to execute
2021-05-24 22:46:18.076393 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:666" took too long (596.402143ms) to execute
2021-05-24 22:46:18.076485 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:22528" took too long (593.98776ms) to execute
2021-05-24 22:46:18.076566 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-k854m\" " with result "range_response_count:1 size:1991" took too long (361.461408ms) to execute
2021-05-24 22:46:18.076599 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-qwvcs\" " with result "range_response_count:1 size:1928" took too long (385.266197ms) to execute
2021-05-24 22:46:18.076622 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (401.732179ms) to execute
2021-05-24 22:46:18.076706 W | etcdserver: read-only range request "key:\"/registry/events/deployment-4205/webserver-6d9cb54865-ptsxl.1682222901558b92\" " with result "range_response_count:1 size:806" took too long (595.114419ms) to execute
2021-05-24 22:46:18.076739 W | etcdserver: read-only range request "key:\"/registry/configmaps/projectcontour/leader-elect\" " with result "range_response_count:1 size:544" took too long (561.475538ms) to execute
2021-05-24 22:46:18.076833 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-mcqm7\" " with result "range_response_count:1 size:1860" took too long (597.718619ms) to execute
2021-05-24 22:46:18.076873 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-564955bcb5-8w7td\" " with result "range_response_count:1 size:1928" took too long (385.129338ms) to execute
2021-05-24 22:46:18.077001 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" " with result "range_response_count:1 size:645" took too long (561.468358ms) to execute
2021-05-24 22:46:18.077097 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-vbrc7\" " with result "range_response_count:1 size:1861" took too long (360.431961ms) to execute
2021-05-24 22:46:18.077216 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (100.266107ms) to execute
2021-05-24 22:46:18.776708 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-kcmgf\" " with result "range_response_count:0 size:6" took too long (995.071665ms) to execute
2021-05-24 22:46:18.776854 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-mcqm7\" " with result "range_response_count:1 size:1860" took too long (996.452656ms) to execute
2021-05-24 22:46:18.777028 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (399.612238ms) to execute
2021-05-24 22:46:18.777135 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes/pvc-8f9384be-8175-46c9-be7a-eb056f98452f\" " with result "range_response_count:1 size:1297" took too long (965.574257ms) to execute
2021-05-24 22:46:18.777241 W | etcdserver: read-only range request "key:\"/registry/events/deployment-4205/webserver.16822227f03855b9\" " with result "range_response_count:1 size:797" took too long (995.882769ms) to execute
2021-05-24 22:46:18.778368 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-c9dbff545-7bthn\" " with result "range_response_count:1 size:2557" took too long (693.719469ms) to execute
2021-05-24 22:46:18.778432 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-mcqm7\" " with result "range_response_count:1 size:1860" took too long (350.785045ms) to execute
2021-05-24 22:46:18.778463 W | etcdserver: read-only range request "key:\"/registry/pods/local-path-storage/create-pvc-2eda23e9-520a-4669-ab94-e9c45fa73061\" " with result "range_response_count:1 size:3248" took too long (293.089735ms) to execute
2021-05-24 22:46:18.778567 W | etcdserver: read-only range request "key:\"/registry/namespaces/job-4536\" " with result "range_response_count:1 size:449" took too long (694.143714ms) to execute
2021-05-24 22:46:18.778650 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.18.0.3\" " with result "range_response_count:1 size:133" took too long (699.848558ms) to execute
2021-05-24 22:46:18.778724 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (103.49127ms) to execute
2021-05-24 22:46:19.177561 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (100.261346ms) to execute
2021-05-24 22:46:19.179692 W | etcdserver: read-only range request "key:\"/registry/pods/deployment-4205/webserver-6d9cb54865-qstd6\" " with result "range_response_count:0 size:6" took too long (393.905545ms) to execute
2021-05-24 22:46:19.576867 W | etcdserver: request "header: txn: success:> failure: