Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1652483994 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 13 23:19:56.138: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.144: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 13 23:19:56.174: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 13 23:19:56.242: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting
May 13 23:19:56.242: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting
May 13 23:19:56.242: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 13 23:19:56.242: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 13 23:19:56.242: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 13 23:19:56.260: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 13 23:19:56.260: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 13 23:19:56.260: INFO: e2e test version: v1.21.9
May 13 23:19:56.262: INFO: kube-apiserver version: v1.21.1
May 13 23:19:56.262: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.267: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
May 13 23:19:56.279: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.301: INFO: Cluster IP family: ipv4
S
------------------------------
May 13 23:19:56.281: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.303: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 13 23:19:56.288: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.307: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 13 23:19:56.290: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.312: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 13 23:19:56.293: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.315: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
May 13 23:19:56.291: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.318: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
May 13 23:19:56.304: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.322: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
May 13 23:19:56.309: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.329: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
May 13 23:19:56.308: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:19:56.332: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0513 23:19:56.573220      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.573: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.575: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 13 23:19:56.577: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:56.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-6457" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W0513 23:19:56.631763      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.632: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.633: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May 13 23:19:56.635: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:56.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6775" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0513 23:19:56.719476      37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.719: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.721: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 13 23:19:56.723: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:56.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5635" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:56.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6379" for this suite.
•SSSS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 13 23:19:57.002: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:57.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-231" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:57.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
May 13 23:19:57.049: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:19:57.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9850" for this suite.
S [SKIPPING] [0.030 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0513 23:19:56.536764      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.537: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.538: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-16
May 13 23:19:56.546: INFO: hairpin-test cluster ip: 10.233.58.53
STEP: creating a client/server pod
May 13 23:19:56.559: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:58.563: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:00.566: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:02.565: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:04.563: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-16 to expose endpoints map[hairpin:[8080]]
May 13 23:20:04.570: INFO: successfully validated that service hairpin-test in namespace services-16 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
May 13 23:20:05.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-16 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
May 13 23:20:05.819: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
May 13 23:20:05.820: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
May 13 23:20:05.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-16 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.58.53 8080'
May 13 23:20:06.073: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.58.53 8080\nConnection to 10.233.58.53 8080 port [tcp/http-alt] succeeded!\n"
May 13 23:20:06.073: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:06.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-16" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:9.568 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":52,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0513 23:19:56.365186      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.365: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.368: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-6830
STEP: deleting original service nodeport-reuse
May 13 23:19:56.401: INFO: Creating new host exec pod
May 13 23:19:56.416: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:58.419: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:00.419: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:02.420: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:04.420: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:06.419: INFO: The status of Pod hostexec is Running (Ready = true)
May 13 23:20:06.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6830 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :31973' | tail -n +2 | grep LISTEN'
May 13 23:20:07.289: INFO: stderr: "+ tail -n +2\n+ grep LISTEN\n+ ss -ant46 'sport = :31973'\n"
May 13 23:20:07.289: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 31973
STEP: deleting service nodeport-reuse in namespace services-6830
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:07.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6830" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:10.978 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:57.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0513 23:19:57.240875      34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:57.241: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:57.242: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
May 13 23:19:57.257: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:59.261: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:01.261: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:03.263: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
May 13 23:20:03.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7735 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
May 13 23:20:03.836: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
May 13 23:20:03.836: INFO: stdout: "iptables"
May 13 23:20:03.836: INFO: proxyMode: iptables
May 13 23:20:03.843: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 13 23:20:03.845: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-7735
May 13 23:20:03.851: INFO: sourceip-test cluster ip: 10.233.3.208
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
May 13 23:20:03.868: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:05.872: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:07.872: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:09.871: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-7735 to expose endpoints map[echo-sourceip:[8080]]
May 13 23:20:09.880: INFO: successfully validated that service sourceip-test in namespace services-7735 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
May 13 23:20:09.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May 13 23:20:11.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080809, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080809, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6bd764684f\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 23:20:13.898: INFO: Waiting up to 2m0s to get response from 10.233.3.208:8080
May 13 23:20:13.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7735 exec pause-pod-6bd764684f-g27zb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.3.208:8080/clientip'
May 13 23:20:14.154: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.3.208:8080/clientip\n"
May 13 23:20:14.154: INFO: stdout: "10.244.4.36:58202"
STEP: Verifying the preserved source ip
May 13 23:20:14.154: INFO: Waiting up to 2m0s to get response from 10.233.3.208:8080
May 13 23:20:14.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7735 exec pause-pod-6bd764684f-v5z4n -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.3.208:8080/clientip'
May 13 23:20:14.466: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.3.208:8080/clientip\n"
May 13 23:20:14.466: INFO: stdout: "10.244.3.151:53630"
STEP: Verifying the preserved source ip
May 13 23:20:14.466: INFO: Deleting deployment
May 13 23:20:14.470: INFO: Cleaning up the echo server pod
May 13 23:20:14.476: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:14.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7735" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:17.318 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:14.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 13 23:20:14.597: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 13 23:20:14.603: INFO: starting watch
STEP: patching
STEP: updating
May 13 23:20:14.610: INFO: waiting for watch events with expected annotations
May 13 23:20:14.610: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May 13 23:20:14.610: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:14.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9893" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":403,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0513 23:19:56.735139      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.735: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.737: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-4769
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:19:56.841: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:19:56.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:58.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:00.877: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:02.877: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:04.875: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:06.876: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:08.881: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:10.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:12.877: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:14.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:16.876: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:18.877: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:18.882: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:22.906: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:22.906: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:22.917: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:22.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4769" for this suite.

S [SKIPPING] [26.212 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0513 23:19:56.623521      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.623: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.625: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-1468
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:19:56.735: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:19:56.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:58.770: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:00.771: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:02.770: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:04.769: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:06.770: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:08.772: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:10.772: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:12.770: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:14.770: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:16.770: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:18.772: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:18.777: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:24.812: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:24.812: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:24.818: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:24.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1468" for this suite.


S [SKIPPING] [28.228 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-8754
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:19:57.064: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:19:57.109: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:59.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:01.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:03.116: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:05.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:07.115: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:09.118: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:11.112: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:13.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:15.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:17.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:19.114: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:19.120: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:27.159: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:27.159: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:27.167: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:27.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8754" for this suite.

S [SKIPPING] [30.242 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:27.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
May 13 23:20:27.388: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
anaconda/
audit/
boot.log
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 13 23:20:27.576: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 13 23:20:27.580: INFO: starting watch
STEP: patching
STEP: updating
May 13 23:20:27.587: INFO: waiting for watch events with expected annotations
May 13 23:20:27.587: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May 13 23:20:27.588: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:27.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-4688" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":341,"failed":0}
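
(Editor's sketch, not part of the log.) The API-operations spec above runs the full verb set (create, get, list, watch, patch, update, delete, delete collection) against networking.k8s.io/v1 NetworkPolicy objects. A minimal sketch of the object shape those verbs act on, as a plain Python dict; the name and selector here are illustrative, not taken from the test:

```python
# Minimal NetworkPolicy manifest of the kind the API-operations spec
# creates, patches, and deletes. Name/selector values are hypothetical.
netpol = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "deny-all-ingress", "namespace": "networkpolicies-4688"},
    "spec": {
        "podSelector": {},           # empty selector matches every pod in the namespace
        "policyTypes": ["Ingress"],  # Ingress listed with no rules -> all ingress denied
    },
}

# The watch assertion in the log blocks until this annotation arrives
# after the PATCH step ('missing expected annotations, waiting: ...').
patched = {
    **netpol,
    "metadata": {**netpol["metadata"], "annotations": {"patched": "true"}},
}
```

The `saw patched and updated annotations` line corresponds to the watch delivering an event whose object carries that `patched: "true"` annotation.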

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:57.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-8705
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:19:57.277: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:19:57.313: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:59.317: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:01.317: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:03.318: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:05.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:07.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:09.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:11.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:13.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:15.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:17.316: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:17.321: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:20:19.325: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:29.360: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:29.360: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:29.367: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:29.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8705" for this suite.


S [SKIPPING] [32.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:06.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-9274
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:06.482: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:06.512: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:08.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:10.518: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:12.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:14.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:16.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:18.518: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:20.519: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:22.519: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:24.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:26.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:28.519: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:28.524: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:34.960: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:34.960: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:34.967: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:34.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9274" for this suite.


S [SKIPPING] [28.629 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
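
(Editor's sketch, not part of the log.) Every `nettest` spec in this run is skipped with the same message: the framework needs at least two schedulable nodes for cross-node connectivity checks, and `-1` is apparently the sentinel it reports when the configured node count is unknown (likely because the suite was not told how many nodes exist). A sketch of that guard with hypothetical helper names; the real check lives at test/e2e/framework/network/utils.go:782:

```python
UNKNOWN_NODE_COUNT = -1  # hypothetical sentinel for "node count not configured"

def skip_reason(configured_nodes: int, minimum: int = 2):
    """Return a skip message matching the log's wording, or None to proceed."""
    if configured_nodes < minimum:
        return f"Requires at least {minimum} nodes (not {configured_nodes})"
    return None
```

With `configured_nodes = -1` this reproduces the exact string seen in each skip block above.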
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:27.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
May 13 23:20:28.024: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-07e35598-81ca-46eb-8196-06a3d3355458  dns-5308  c0e8bdb1-66d0-4af8-80cb-e9e38adccadc 73052 0 2022-05-13 23:20:28 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-05-13 23:20:28 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-7zvnl,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-cqfjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cqfjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 13 23:20:36.034: INFO: testServerIP is 10.244.3.157
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 13 23:20:36.044: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-5308  32cebdd5-1078-4dfc-924f-db9e064a4037 73338 0 2022-05-13 23:20:36 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-05-13 23:20:36 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jhmr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jhmr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.157],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
May 13 23:20:40.050: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-5308 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:40.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
May 13 23:20:40.161: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-5308 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:40.161: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:40.327: INFO: Deleting pod e2e-dns-utils...
May 13 23:20:40.336: INFO: Deleting pod e2e-configmap-dns-server-07e35598-81ca-46eb-8196-06a3d3355458...
May 13 23:20:40.342: INFO: Deleting configmap e2e-coredns-configmap-7zvnl...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:40.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5308" for this suite.


• [SLOW TEST:12.367 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":3,"skipped":540,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0513 23:19:56.662954      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.663: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.664: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-8233
STEP: creating a client pod for probing the service svc-udp
May 13 23:19:56.688: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:58.692: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:00.693: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:02.693: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:04.691: INFO: The status of Pod pod-client is Running (Ready = true)
May 13 23:20:05.013: INFO: Pod client logs: Fri May 13 23:20:02 UTC 2022
Fri May 13 23:20:02 UTC 2022 Try: 1

Fri May 13 23:20:02 UTC 2022 Try: 2

Fri May 13 23:20:02 UTC 2022 Try: 3

Fri May 13 23:20:02 UTC 2022 Try: 4

Fri May 13 23:20:02 UTC 2022 Try: 5

Fri May 13 23:20:02 UTC 2022 Try: 6

Fri May 13 23:20:02 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
May 13 23:20:05.025: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:07.029: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:09.030: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:11.028: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:13.032: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8233 to expose endpoints map[pod-server-1:[80]]
May 13 23:20:13.043: INFO: successfully validated that service svc-udp in namespace conntrack-8233 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
May 13 23:20:23.069: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:25.073: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:27.073: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:29.074: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:31.073: INFO: The status of Pod pod-server-2 is Running (Ready = true)
May 13 23:20:31.075: INFO: Cleaning up pod-server-1 pod
May 13 23:20:31.082: INFO: Waiting for pod pod-server-1 to disappear
May 13 23:20:31.084: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8233 to expose endpoints map[pod-server-2:[80]]
May 13 23:20:31.091: INFO: successfully validated that service svc-udp in namespace conntrack-8233 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:41.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8233" for this suite.


• [SLOW TEST:44.474 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":91,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:41.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 13 23:20:41.175: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:41.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7411" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:41.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-8661
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-8661
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:41.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8661" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":2,"skipped":458,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:14.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-7962
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:14.803: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:14.834: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:16.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:18.839: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:20.839: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:22.838: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:24.837: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:26.839: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:28.842: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:30.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:32.841: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:34.957: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:36.839: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:36.843: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:42.879: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:42.879: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:42.886: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:42.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7962" for this suite.


S [SKIPPING] [28.228 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:29.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
May 13 23:20:39.465: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
May 13 23:20:39.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-test7z42z -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
May 13 23:20:39.752: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
May 13 23:20:39.752: INFO: stdout: "10.244.3.159:54386"
STEP: Verifying the preserved source ip
May 13 23:20:39.752: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 13 23:20:39.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-test7z42z -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 13 23:20:40.033: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 13 23:20:40.033: INFO: stdout: "10.244.3.159:50562"
STEP: Verifying the preserved source ip
May 13 23:20:40.033: INFO: Waiting up to 2m0s to get response from 10.244.4.48:8080
May 13 23:20:40.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-test7z42z -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip'
May 13 23:20:40.357: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip\n"
May 13 23:20:40.357: INFO: stdout: "10.244.3.159:50290"
STEP: Verifying the preserved source ip
May 13 23:20:40.357: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
May 13 23:20:40.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-test7z42z -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
May 13 23:20:40.923: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
May 13 23:20:40.923: INFO: stdout: "10.244.3.159:51520"
STEP: Verifying the preserved source ip
May 13 23:20:40.923: INFO: Waiting up to 2m0s to get response from 10.244.3.159:8080
May 13 23:20:40.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testcrjpj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip'
May 13 23:20:41.230: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip\n"
May 13 23:20:41.230: INFO: stdout: "10.244.1.5:59594"
STEP: Verifying the preserved source ip
May 13 23:20:41.230: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 13 23:20:41.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testcrjpj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 13 23:20:41.476: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 13 23:20:41.476: INFO: stdout: "10.244.1.5:46690"
STEP: Verifying the preserved source ip
May 13 23:20:41.476: INFO: Waiting up to 2m0s to get response from 10.244.4.48:8080
May 13 23:20:41.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testcrjpj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip'
May 13 23:20:41.714: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip\n"
May 13 23:20:41.714: INFO: stdout: "10.244.1.5:45728"
STEP: Verifying the preserved source ip
May 13 23:20:41.714: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
May 13 23:20:41.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testcrjpj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
May 13 23:20:41.963: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
May 13 23:20:41.963: INFO: stdout: "10.244.1.5:47142"
STEP: Verifying the preserved source ip
May 13 23:20:41.963: INFO: Waiting up to 2m0s to get response from 10.244.3.159:8080
May 13 23:20:41.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testp4x6t -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip'
May 13 23:20:42.215: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip\n"
May 13 23:20:42.215: INFO: stdout: "10.244.0.7:44190"
STEP: Verifying the preserved source ip
May 13 23:20:42.215: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
May 13 23:20:42.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testp4x6t -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
May 13 23:20:42.470: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
May 13 23:20:42.470: INFO: stdout: "10.244.0.7:52602"
STEP: Verifying the preserved source ip
May 13 23:20:42.470: INFO: Waiting up to 2m0s to get response from 10.244.4.48:8080
May 13 23:20:42.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testp4x6t -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip'
May 13 23:20:42.722: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip\n"
May 13 23:20:42.722: INFO: stdout: "10.244.0.7:48618"
STEP: Verifying the preserved source ip
May 13 23:20:42.722: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
May 13 23:20:42.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testp4x6t -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
May 13 23:20:42.973: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
May 13 23:20:42.973: INFO: stdout: "10.244.0.7:60686"
STEP: Verifying the preserved source ip
May 13 23:20:42.973: INFO: Waiting up to 2m0s to get response from 10.244.3.159:8080
May 13 23:20:42.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testrp4zv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip'
May 13 23:20:43.237: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip\n"
May 13 23:20:43.237: INFO: stdout: "10.244.4.48:49028"
STEP: Verifying the preserved source ip
May 13 23:20:43.237: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
May 13 23:20:43.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testrp4zv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
May 13 23:20:43.508: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
May 13 23:20:43.508: INFO: stdout: "10.244.4.48:37590"
STEP: Verifying the preserved source ip
May 13 23:20:43.508: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 13 23:20:43.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testrp4zv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 13 23:20:43.771: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 13 23:20:43.771: INFO: stdout: "10.244.4.48:58512"
STEP: Verifying the preserved source ip
May 13 23:20:43.771: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
May 13 23:20:43.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testrp4zv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
May 13 23:20:44.031: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
May 13 23:20:44.031: INFO: stdout: "10.244.4.48:41964"
STEP: Verifying the preserved source ip
May 13 23:20:44.031: INFO: Waiting up to 2m0s to get response from 10.244.3.159:8080
May 13 23:20:44.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testww2qr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip'
May 13 23:20:44.323: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.159:8080/clientip\n"
May 13 23:20:44.323: INFO: stdout: "10.244.2.6:42464"
STEP: Verifying the preserved source ip
May 13 23:20:44.323: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
May 13 23:20:44.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testww2qr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
May 13 23:20:44.561: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
May 13 23:20:44.561: INFO: stdout: "10.244.2.6:48692"
STEP: Verifying the preserved source ip
May 13 23:20:44.561: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 13 23:20:44.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testww2qr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 13 23:20:44.802: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 13 23:20:44.802: INFO: stdout: "10.244.2.6:37298"
STEP: Verifying the preserved source ip
May 13 23:20:44.802: INFO: Waiting up to 2m0s to get response from 10.244.4.48:8080
May 13 23:20:44.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-812 exec no-snat-testww2qr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip'
May 13 23:20:45.056: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.48:8080/clientip\n"
May 13 23:20:45.056: INFO: stdout: "10.244.2.6:59888"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:45.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-812" for this suite.


• [SLOW TEST:15.684 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":1,"skipped":302,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:42.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
May 13 23:20:42.088: INFO: Creating new exec pod
May 13 23:20:48.107: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
May 13 23:20:48.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3999 exec execpod-noendpointsdtnsz -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
May 13 23:20:49.817: INFO: rc: 1
May 13 23:20:49.817: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3999 exec execpod-noendpointsdtnsz -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:49.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3999" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:7.773 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":3,"skipped":523,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:49.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-2226
STEP: changing service nodeport-range-test to out-of-range NodePort 64053
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 64053
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:50.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2226" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":4,"skipped":606,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:23.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-5766
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:23.174: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:23.204: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:25.208: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:27.208: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:29.209: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:31.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:33.209: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:35.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:37.209: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:39.211: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:41.209: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:43.210: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:45.208: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:45.213: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:53.235: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:53.235: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:53.242: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:20:53.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5766" for this suite.


S [SKIPPING] [30.195 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:35.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-1020
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:35.685: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:35.715: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:37.719: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:39.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:41.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:43.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:45.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:47.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:49.719: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:51.719: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:53.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:55.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:57.719: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:57.725: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:21:01.746: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:01.746: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:01.753: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:01.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1020" for this suite.


S [SKIPPING] [26.187 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:40.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-5494
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:40.567: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:40.599: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:42.602: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:44.603: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:46.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:48.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:50.602: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:52.604: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:54.602: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:56.602: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:58.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:00.602: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:00.606: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:21:02.611: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:21:08.633: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:08.633: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:08.639: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:08.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5494" for this suite.


S [SKIPPING] [28.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:50.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-eba6ff43-db06-44fc-8f04-e827a25f676e]
STEP: Verifying pods for RC slow-terminating-unready-pod
May 13 23:20:50.622: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
May 13 23:20:55.626: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
May 13 23:20:55.636: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-wwlkw]: "NOW: 2022-05-13 23:20:55.633854522 +0000 UTC m=+2.515602358", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-6987.svc.cluster.local
May 13 23:20:55.636: INFO: Creating new exec pod
May 13 23:20:59.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6987 exec execpod-hhpzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/'
May 13 23:20:59.911: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/\n"
May 13 23:20:59.911: INFO: stdout: "NOW: 2022-05-13 23:20:59.903274785 +0000 UTC m=+6.785022622"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6987 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
May 13 23:21:04.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6987 exec execpod-hhpzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/; test "$?" -ne "0"'
May 13 23:21:05.213: INFO: rc: 1
May 13 23:21:05.213: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-05-13 23:21:05.203258079 +0000 UTC m=+12.085005965, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6987 exec execpod-hhpzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2022-05-13 23:21:05.203258079 +0000 UTC m=+12.085005965
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
May 13 23:21:07.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6987 exec execpod-hhpzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/; test "$?" -ne "0"'
May 13 23:21:08.541: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/\n+ test 7 -ne 0\n"
May 13 23:21:08.541: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
May 13 23:21:08.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6987 exec execpod-hhpzt -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/'
May 13 23:21:08.877: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6987.svc.cluster.local:80/\n"
May 13 23:21:08.877: INFO: stdout: "NOW: 2022-05-13 23:21:08.867662895 +0000 UTC m=+15.749410732"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-6987
STEP: deleting service tolerate-unready in namespace services-6987
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:08.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6987" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:18.330 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":5,"skipped":892,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:43.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-149
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:43.388: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:43.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:45.427: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:47.428: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:49.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:51.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:53.429: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:55.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:57.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:59.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:01.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:03.430: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:03.434: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:21:05.441: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:21:11.461: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:11.461: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:11.468: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:11.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-149" for this suite.


S [SKIPPING] [28.200 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:53.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-4046
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:20:53.410: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:20:53.442: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:55.445: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:57.447: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:59.445: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:01.445: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:03.446: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:05.446: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:07.446: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:09.445: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:11.446: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:13.447: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:15.445: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:15.450: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:21:23.471: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:23.471: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:23.478: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:23.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4046" for this suite.


S [SKIPPING] [30.195 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:23.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May 13 23:21:23.670: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:23.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-2793" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:07.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-1780
STEP: creating service service-proxy-disabled in namespace services-1780
STEP: creating replication controller service-proxy-disabled in namespace services-1780
I0513 23:20:07.874609      28 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-1780, replica count: 3
I0513 23:20:10.925556      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:13.926353      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-1780
STEP: creating service service-proxy-toggled in namespace services-1780
STEP: creating replication controller service-proxy-toggled in namespace services-1780
I0513 23:20:13.939010      28 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-1780, replica count: 3
I0513 23:20:16.991690      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:19.992151      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:22.992963      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May 13 23:20:22.995: INFO: Creating new host exec pod
May 13 23:20:23.007: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:25.010: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:27.011: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:29.011: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:20:29.011: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:20:35.026: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done" in pod services-1780/verify-service-up-host-exec-pod
May 13 23:20:35.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done'
May 13 23:20:35.419: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.193:80\n+ echo\n[... the `+ wget -q -T 1 -O - http://10.233.60.193:80\n+ echo\n` pair repeats, 150 iterations total ...]\n"
May 13 23:20:35.420: INFO: stdout: "service-proxy-toggled-8b4db\nservice-proxy-toggled-4wh52\nservice-proxy-toggled-57hg2\n[150 responses in total, all from the same three backends: -8b4db, -4wh52, -57hg2; duplicate lines elided]\n"
May 13 23:20:35.420: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done" in pod services-1780/verify-service-up-exec-pod-dl7c8
May 13 23:20:35.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-up-exec-pod-dl7c8 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done'
May 13 23:20:35.796: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.193:80\n+ echo\n+ [identical wget/echo trace repeated for all 150 iterations; duplicate lines elided]\n"
May 13 23:20:35.796: INFO: stdout: "service-proxy-toggled-57hg2\nservice-proxy-toggled-4wh52\nservice-proxy-toggled-4wh52\n[150 responses in total, all from the same three backends: -8b4db, -4wh52, -57hg2; duplicate lines elided]\n"
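The exec dumps above are one reachability pass: 150 wget requests against the service ClusterIP, where the set of distinct pod names in stdout shows how many backends answered. A minimal local sketch of the counting step — the `responses` sample is a short illustrative excerpt of names copied from the stdout above, not the full 150 lines:

```shell
# Count distinct backends among the captured responses
# (sample names copied from the stdout in the log above).
responses='service-proxy-toggled-8b4db
service-proxy-toggled-4wh52
service-proxy-toggled-57hg2
service-proxy-toggled-57hg2
service-proxy-toggled-8b4db'

# The check passes when the number of distinct names matches
# the expected backend count (3 in this test).
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
echo "$distinct"    # prints: 3
```

The real test runs the wget loop inside both a host-network exec pod and a regular exec pod, so the same counting logic is applied to traffic from two vantage points.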
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1780
STEP: Deleting pod verify-service-up-exec-pod-dl7c8 in namespace services-1780
STEP: verifying service-disabled is not up
May 13 23:20:35.808: INFO: Creating new host exec pod
May 13 23:20:35.819: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:37.823: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:39.823: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:20:39.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed'
May 13 23:20:42.221: INFO: rc: 28
May 13 23:20:42.221: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed" in pod services-1780/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.62.133:80
command terminated with exit code 28

error:
exit status 28
Output: 
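The `rc: 28` above is curl's timeout exit code (CURLE_OPERATION_TIMEDOUT): the connect timed out, so the `&& echo service-down-failed` sentinel was never reached — which is exactly what a "service is down" check wants. A minimal sketch of that pattern, with a hypothetical `probe` function standing in for the real curl:

```shell
# probe stands in for: curl -g -s --connect-timeout 2 http://<clusterIP>:80
# A connect timeout makes curl exit 28, so the sentinel after && never prints.
probe() { return 28; }

if sentinel=$(probe && echo service-down-failed); then
  result=up          # curl succeeded and printed the sentinel: service unexpectedly up
else
  result=down        # curl failed (e.g. exit 28): service is down, as required
fi
echo "$result"       # prints: down
```

Seeing `service-down-failed` in stdout would therefore mean the check failed; an empty stdout plus a non-zero exit is the passing case.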
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1780
STEP: adding service-proxy-name label
STEP: verifying service is not up
May 13 23:20:42.238: INFO: Creating new host exec pod
May 13 23:20:42.249: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:44.254: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:46.253: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:48.253: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:50.253: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:52.254: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:54.257: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:56.254: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:58.256: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:00.252: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:02.253: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:21:02.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.60.193:80 && echo service-down-failed'
May 13 23:21:04.651: INFO: rc: 28
May 13 23:21:04.651: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.60.193:80 && echo service-down-failed" in pod services-1780/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.60.193:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.60.193:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1780
STEP: removing service-proxy-name label
STEP: verifying service is up
May 13 23:21:04.667: INFO: Creating new host exec pod
May 13 23:21:04.681: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:06.685: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:08.700: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:21:08.700: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:21:14.716: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done" in pod services-1780/verify-service-up-host-exec-pod
May 13 23:21:14.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done'
May 13 23:21:15.269: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.193:80\n+ echo\n+ [identical wget/echo trace repeated for all 150 iterations; duplicate lines elided]\n"
May 13 23:21:15.269: INFO: stdout: "service-proxy-toggled-57hg2\nservice-proxy-toggled-4wh52\nservice-proxy-toggled-8b4db\n[… 150 responses in total, all answered by the three service-proxy-toggled endpoints (57hg2, 4wh52, 8b4db) …]\n"
May 13 23:21:15.269: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done" in pod services-1780/verify-service-up-exec-pod-2zfvk
May 13 23:21:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-up-exec-pod-2zfvk -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.193:80 2>&1 || true; echo; done'
May 13 23:21:15.724: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.193:80\n+ echo\n[… identical "+ wget -q -T 1 -O - http://10.233.60.193:80" / "+ echo" trace pairs elided for the remaining iterations …]\n"
May 13 23:21:15.725: INFO: stdout: "service-proxy-toggled-8b4db\nservice-proxy-toggled-8b4db\nservice-proxy-toggled-8b4db\nservice-proxy-toggled-57hg2\n[… 150 responses in total, all answered by the three service-proxy-toggled endpoints (57hg2, 4wh52, 8b4db) …]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1780
STEP: Deleting pod verify-service-up-exec-pod-2zfvk in namespace services-1780
STEP: verifying service-disabled is still not up
May 13 23:21:15.739: INFO: Creating new host exec pod
May 13 23:21:15.753: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:17.758: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:19.757: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:21.756: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:23.758: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:21:23.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed'
May 13 23:21:26.034: INFO: rc: 28
May 13 23:21:26.034: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed" in pod services-1780/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1780 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.62.133:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.62.133:80
command terminated with exit code 28

error:
exit status 28
Output: 
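The rc 28 above is curl's "operation timed out" exit code. Because the probe is `curl … && echo service-down-failed`, a timeout suppresses the echo, which is exactly the outcome the test wants for a disabled service. A minimal sketch of the same pattern, with `sh -c 'exit 28'` standing in for the timing-out curl (no cluster needed):

```shell
# In a real cluster the probe is:
#   curl -g -s --connect-timeout 2 http://<clusterIP>:80 && echo service-down-failed
# Here a command that exits 28 stands in for the timed-out curl.
probe() {
  sh -c 'exit 28' && echo service-down-failed
}

# Capture output and exit status; the "if" keeps this safe under set -e.
if out=$(probe); then
  rc=0
else
  rc=$?
fi
echo "rc=$rc out='$out'"
```

The echo never fires, so an empty stdout plus a nonzero rc means "service is down, as expected".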
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1780
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:26.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1780" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:78.209 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":2,"skipped":293,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:25.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
May 13 23:20:25.185: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:27.188: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:29.191: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:31.188: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:33.190: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
May 13 23:20:33.211: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:35.216: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:37.216: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
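The boom-server log below prints a raw 16-bit word from the TCP header as `Flags:` (data offset plus flag bits), so only its low 6 bits carry FIN/SYN/RST/PSH/ACK/URG. A sketch for decoding the values that appear in the log, assuming that field layout:

```shell
# Decode the low 6 TCP flag bits out of the combined offset+flags word
# that the boom-server log prints (e.g. Flags:40962, Flags:32785).
decode_flags() {
  f=$(( $1 & 0x3f ))
  out=""
  [ $(( f & 0x01 )) -ne 0 ] && out="$out FIN"
  [ $(( f & 0x02 )) -ne 0 ] && out="$out SYN"
  [ $(( f & 0x04 )) -ne 0 ] && out="$out RST"
  [ $(( f & 0x08 )) -ne 0 ] && out="$out PSH"
  [ $(( f & 0x10 )) -ne 0 ] && out="$out ACK"
  [ $(( f & 0x20 )) -ne 0 ] && out="$out URG"
  echo "${out# }"
}

decode_flags 40962   # matches the log's "flag: SYN"
decode_flags 32784   # matches the log's "flag: ACK"
decode_flags 32785   # matches the log's "flag: FIN ACK"
```

The test passes as long as no RST ever shows up here: the client must ignore the injected INVALID packet rather than reset the connection.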
May 13 23:21:37.274: INFO: boom-server pod logs: 2022/05/13 23:20:30 external ip: 10.244.4.46
2022/05/13 23:20:30 listen on 0.0.0.0:9000
2022/05/13 23:20:30 probing 10.244.4.46
2022/05/13 23:20:37 tcp packet: &{SrcPort:41510 DestPort:9000 Seq:2833301316 Ack:0 Flags:40962 WindowSize:29200 Checksum:28289 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:37 tcp packet: &{SrcPort:41510 DestPort:9000 Seq:2833301317 Ack:2977155264 Flags:32784 WindowSize:229 Checksum:22905 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:37 connection established
2022/05/13 23:20:37 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 162 38 177 114 66 32 168 224 191 69 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:37 checksumer: &{sum:449148 oddByte:33 length:39}
2022/05/13 23:20:37 ret:  449181
2022/05/13 23:20:37 ret:  55971
2022/05/13 23:20:37 ret:  55971
2022/05/13 23:20:37 boom packet injected
2022/05/13 23:20:37 tcp packet: &{SrcPort:41510 DestPort:9000 Seq:2833301317 Ack:2977155264 Flags:32785 WindowSize:229 Checksum:22903 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:39 tcp packet: &{SrcPort:44855 DestPort:9000 Seq:893965767 Ack:0 Flags:40962 WindowSize:29200 Checksum:46773 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:39 tcp packet: &{SrcPort:44855 DestPort:9000 Seq:893965768 Ack:564506863 Flags:32784 WindowSize:229 Checksum:16765 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:39 connection established
2022/05/13 23:20:39 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 175 55 33 164 42 79 53 72 213 200 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:39 checksumer: &{sum:472708 oddByte:33 length:39}
2022/05/13 23:20:39 ret:  472741
2022/05/13 23:20:39 ret:  13996
2022/05/13 23:20:39 ret:  13996
2022/05/13 23:20:39 boom packet injected
2022/05/13 23:20:39 tcp packet: &{SrcPort:44855 DestPort:9000 Seq:893965768 Ack:564506863 Flags:32785 WindowSize:229 Checksum:16764 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:41 tcp packet: &{SrcPort:38476 DestPort:9000 Seq:2469498054 Ack:0 Flags:40962 WindowSize:29200 Checksum:45800 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:41 tcp packet: &{SrcPort:38476 DestPort:9000 Seq:2469498055 Ack:2160559089 Flags:32784 WindowSize:229 Checksum:956 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:41 connection established
2022/05/13 23:20:41 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 150 76 128 197 253 81 147 49 140 199 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:41 checksumer: &{sum:481202 oddByte:33 length:39}
2022/05/13 23:20:41 ret:  481235
2022/05/13 23:20:41 ret:  22490
2022/05/13 23:20:41 ret:  22490
2022/05/13 23:20:41 boom packet injected
2022/05/13 23:20:41 tcp packet: &{SrcPort:38476 DestPort:9000 Seq:2469498055 Ack:2160559089 Flags:32785 WindowSize:229 Checksum:955 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:43 tcp packet: &{SrcPort:38936 DestPort:9000 Seq:3989334308 Ack:0 Flags:40962 WindowSize:29200 Checksum:29270 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:43 tcp packet: &{SrcPort:38936 DestPort:9000 Seq:3989334309 Ack:1469311052 Flags:32784 WindowSize:229 Checksum:32817 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:43 connection established
2022/05/13 23:20:43 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 152 24 87 146 97 172 237 200 105 37 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:43 checksumer: &{sum:475174 oddByte:33 length:39}
2022/05/13 23:20:43 ret:  475207
2022/05/13 23:20:43 ret:  16462
2022/05/13 23:20:43 ret:  16462
2022/05/13 23:20:43 boom packet injected
2022/05/13 23:20:43 tcp packet: &{SrcPort:38936 DestPort:9000 Seq:3989334309 Ack:1469311052 Flags:32785 WindowSize:229 Checksum:32816 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:45 tcp packet: &{SrcPort:44406 DestPort:9000 Seq:443780808 Ack:0 Flags:40962 WindowSize:29200 Checksum:730 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:45 tcp packet: &{SrcPort:44406 DestPort:9000 Seq:443780809 Ack:2974151760 Flags:32784 WindowSize:229 Checksum:41773 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:45 connection established
2022/05/13 23:20:45 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 173 118 177 68 109 176 26 115 142 201 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:45 checksumer: &{sum:500467 oddByte:33 length:39}
2022/05/13 23:20:45 ret:  500500
2022/05/13 23:20:45 ret:  41755
2022/05/13 23:20:45 ret:  41755
2022/05/13 23:20:45 boom packet injected
2022/05/13 23:20:45 tcp packet: &{SrcPort:44406 DestPort:9000 Seq:443780809 Ack:2974151760 Flags:32785 WindowSize:229 Checksum:41772 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:47 tcp packet: &{SrcPort:41510 DestPort:9000 Seq:2833301318 Ack:2977155265 Flags:32784 WindowSize:229 Checksum:2906 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:47 tcp packet: &{SrcPort:37295 DestPort:9000 Seq:466131243 Ack:0 Flags:40962 WindowSize:29200 Checksum:2840 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:47 tcp packet: &{SrcPort:37295 DestPort:9000 Seq:466131244 Ack:2740550315 Flags:32784 WindowSize:229 Checksum:11053 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:47 connection established
2022/05/13 23:20:47 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 145 175 163 87 244 11 27 200 153 44 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:47 checksumer: &{sum:459356 oddByte:33 length:39}
2022/05/13 23:20:47 ret:  459389
2022/05/13 23:20:47 ret:  644
2022/05/13 23:20:47 ret:  644
2022/05/13 23:20:47 boom packet injected
2022/05/13 23:20:47 tcp packet: &{SrcPort:37295 DestPort:9000 Seq:466131244 Ack:2740550315 Flags:32785 WindowSize:229 Checksum:11052 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:49 tcp packet: &{SrcPort:44855 DestPort:9000 Seq:893965769 Ack:564506864 Flags:32784 WindowSize:229 Checksum:62297 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:49 tcp packet: &{SrcPort:45403 DestPort:9000 Seq:2925213370 Ack:0 Flags:40962 WindowSize:29200 Checksum:45942 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:49 tcp packet: &{SrcPort:45403 DestPort:9000 Seq:2925213371 Ack:2821654964 Flags:32784 WindowSize:229 Checksum:14298 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:49 connection established
2022/05/13 23:20:49 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 177 91 168 45 131 20 174 91 54 187 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:49 checksumer: &{sum:438080 oddByte:33 length:39}
2022/05/13 23:20:49 ret:  438113
2022/05/13 23:20:49 ret:  44903
2022/05/13 23:20:49 ret:  44903
2022/05/13 23:20:49 boom packet injected
2022/05/13 23:20:49 tcp packet: &{SrcPort:45403 DestPort:9000 Seq:2925213371 Ack:2821654964 Flags:32785 WindowSize:229 Checksum:14296 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:51 tcp packet: &{SrcPort:38476 DestPort:9000 Seq:2469498056 Ack:2160559090 Flags:32784 WindowSize:229 Checksum:46487 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:51 tcp packet: &{SrcPort:45374 DestPort:9000 Seq:2736882135 Ack:0 Flags:40962 WindowSize:29200 Checksum:27615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:51 tcp packet: &{SrcPort:45374 DestPort:9000 Seq:2736882136 Ack:1437489098 Flags:32784 WindowSize:229 Checksum:60636 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:51 connection established
2022/05/13 23:20:51 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 177 62 85 172 209 42 163 33 129 216 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:51 checksumer: &{sum:461435 oddByte:33 length:39}
2022/05/13 23:20:51 ret:  461468
2022/05/13 23:20:51 ret:  2723
2022/05/13 23:20:51 ret:  2723
2022/05/13 23:20:51 boom packet injected
2022/05/13 23:20:51 tcp packet: &{SrcPort:45374 DestPort:9000 Seq:2736882136 Ack:1437489098 Flags:32785 WindowSize:229 Checksum:60635 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:53 tcp packet: &{SrcPort:38936 DestPort:9000 Seq:3989334310 Ack:1469311053 Flags:32784 WindowSize:229 Checksum:12814 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:53 tcp packet: &{SrcPort:41737 DestPort:9000 Seq:1790895357 Ack:0 Flags:40962 WindowSize:29200 Checksum:18304 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:53 tcp packet: &{SrcPort:41737 DestPort:9000 Seq:1790895358 Ack:2094544893 Flags:32784 WindowSize:229 Checksum:47439 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:53 connection established
2022/05/13 23:20:53 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 163 9 124 214 177 93 106 190 228 254 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:53 checksumer: &{sum:521630 oddByte:33 length:39}
2022/05/13 23:20:53 ret:  521663
2022/05/13 23:20:53 ret:  62918
2022/05/13 23:20:53 ret:  62918
2022/05/13 23:20:53 boom packet injected
2022/05/13 23:20:53 tcp packet: &{SrcPort:41737 DestPort:9000 Seq:1790895358 Ack:2094544893 Flags:32785 WindowSize:229 Checksum:47438 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:55 tcp packet: &{SrcPort:44406 DestPort:9000 Seq:443780810 Ack:2974151761 Flags:32784 WindowSize:229 Checksum:21770 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:55 tcp packet: &{SrcPort:42510 DestPort:9000 Seq:618652180 Ack:0 Flags:40962 WindowSize:29200 Checksum:34163 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:55 tcp packet: &{SrcPort:42510 DestPort:9000 Seq:618652181 Ack:2997747478 Flags:32784 WindowSize:229 Checksum:62083 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:55 connection established
2022/05/13 23:20:55 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 166 14 178 172 120 118 36 223 226 21 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:55 checksumer: &{sum:467286 oddByte:33 length:39}
2022/05/13 23:20:55 ret:  467319
2022/05/13 23:20:55 ret:  8574
2022/05/13 23:20:55 ret:  8574
2022/05/13 23:20:55 boom packet injected
2022/05/13 23:20:55 tcp packet: &{SrcPort:42510 DestPort:9000 Seq:618652181 Ack:2997747478 Flags:32785 WindowSize:229 Checksum:62082 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:57 tcp packet: &{SrcPort:37295 DestPort:9000 Seq:466131245 Ack:2740550316 Flags:32784 WindowSize:229 Checksum:56584 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:57 tcp packet: &{SrcPort:42371 DestPort:9000 Seq:937983146 Ack:0 Flags:40962 WindowSize:29200 Checksum:53390 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:57 tcp packet: &{SrcPort:42371 DestPort:9000 Seq:937983147 Ack:1305564572 Flags:32784 WindowSize:229 Checksum:17445 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:57 connection established
2022/05/13 23:20:57 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 165 131 77 207 206 252 55 232 124 171 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:57 checksumer: &{sum:581107 oddByte:33 length:39}
2022/05/13 23:20:57 ret:  581140
2022/05/13 23:20:57 ret:  56860
2022/05/13 23:20:57 ret:  56860
2022/05/13 23:20:57 boom packet injected
2022/05/13 23:20:57 tcp packet: &{SrcPort:42371 DestPort:9000 Seq:937983147 Ack:1305564572 Flags:32785 WindowSize:229 Checksum:17444 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:59 tcp packet: &{SrcPort:45403 DestPort:9000 Seq:2925213372 Ack:2821654965 Flags:32784 WindowSize:229 Checksum:59829 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:59 tcp packet: &{SrcPort:38033 DestPort:9000 Seq:1728722000 Ack:0 Flags:40962 WindowSize:29200 Checksum:62184 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:20:59 tcp packet: &{SrcPort:38033 DestPort:9000 Seq:1728722001 Ack:2022745600 Flags:32784 WindowSize:229 Checksum:58250 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:20:59 connection established
2022/05/13 23:20:59 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 148 145 120 143 31 96 103 10 52 81 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:20:59 checksumer: &{sum:448326 oddByte:33 length:39}
2022/05/13 23:20:59 ret:  448359
2022/05/13 23:20:59 ret:  55149
2022/05/13 23:20:59 ret:  55149
2022/05/13 23:20:59 boom packet injected
2022/05/13 23:20:59 tcp packet: &{SrcPort:38033 DestPort:9000 Seq:1728722001 Ack:2022745600 Flags:32785 WindowSize:229 Checksum:58249 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:01 tcp packet: &{SrcPort:45374 DestPort:9000 Seq:2736882137 Ack:1437489099 Flags:32784 WindowSize:229 Checksum:40634 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:01 tcp packet: &{SrcPort:34571 DestPort:9000 Seq:3254332073 Ack:0 Flags:40962 WindowSize:29200 Checksum:42836 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:01 tcp packet: &{SrcPort:34571 DestPort:9000 Seq:3254332074 Ack:1449973198 Flags:32784 WindowSize:229 Checksum:33403 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:01 connection established
2022/05/13 23:21:01 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 135 11 86 107 79 46 193 249 42 170 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:01 checksumer: &{sum:476055 oddByte:33 length:39}
2022/05/13 23:21:01 ret:  476088
2022/05/13 23:21:01 ret:  17343
2022/05/13 23:21:01 ret:  17343
2022/05/13 23:21:01 boom packet injected
2022/05/13 23:21:01 tcp packet: &{SrcPort:34571 DestPort:9000 Seq:3254332074 Ack:1449973198 Flags:32785 WindowSize:229 Checksum:33402 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:03 tcp packet: &{SrcPort:41737 DestPort:9000 Seq:1790895359 Ack:2094544894 Flags:32784 WindowSize:229 Checksum:27436 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:03 tcp packet: &{SrcPort:34714 DestPort:9000 Seq:900810376 Ack:0 Flags:40962 WindowSize:29200 Checksum:3934 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:03 tcp packet: &{SrcPort:34714 DestPort:9000 Seq:900810377 Ack:2418900440 Flags:32784 WindowSize:229 Checksum:233 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:03 connection established
2022/05/13 23:21:03 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 135 154 144 43 247 56 53 177 70 137 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:03 checksumer: &{sum:472073 oddByte:33 length:39}
2022/05/13 23:21:03 ret:  472106
2022/05/13 23:21:03 ret:  13361
2022/05/13 23:21:03 ret:  13361
2022/05/13 23:21:03 boom packet injected
2022/05/13 23:21:03 tcp packet: &{SrcPort:34714 DestPort:9000 Seq:900810377 Ack:2418900440 Flags:32785 WindowSize:229 Checksum:232 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:05 tcp packet: &{SrcPort:42510 DestPort:9000 Seq:618652182 Ack:2997747479 Flags:32784 WindowSize:229 Checksum:42080 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:05 tcp packet: &{SrcPort:36602 DestPort:9000 Seq:1601395359 Ack:0 Flags:40962 WindowSize:29200 Checksum:49748 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:05 tcp packet: &{SrcPort:36602 DestPort:9000 Seq:1601395360 Ack:3270791240 Flags:32784 WindowSize:229 Checksum:42712 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:05 connection established
2022/05/13 23:21:05 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 142 250 194 242 201 168 95 115 90 160 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:05 checksumer: &{sum:566354 oddByte:33 length:39}
2022/05/13 23:21:05 ret:  566387
2022/05/13 23:21:05 ret:  42107
2022/05/13 23:21:05 ret:  42107
2022/05/13 23:21:05 boom packet injected
2022/05/13 23:21:05 tcp packet: &{SrcPort:36602 DestPort:9000 Seq:1601395360 Ack:3270791240 Flags:32785 WindowSize:229 Checksum:42711 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:07 tcp packet: &{SrcPort:42371 DestPort:9000 Seq:937983148 Ack:1305564573 Flags:32784 WindowSize:229 Checksum:62976 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:07 tcp packet: &{SrcPort:44136 DestPort:9000 Seq:421731061 Ack:0 Flags:40962 WindowSize:29200 Checksum:8976 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:07 tcp packet: &{SrcPort:44136 DestPort:9000 Seq:421731062 Ack:4128831410 Flags:32784 WindowSize:229 Checksum:9524 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:07 connection established
2022/05/13 23:21:07 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 172 104 246 23 113 18 25 35 26 246 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:07 checksumer: &{sum:435910 oddByte:33 length:39}
2022/05/13 23:21:07 ret:  435943
2022/05/13 23:21:07 ret:  42733
2022/05/13 23:21:07 ret:  42733
2022/05/13 23:21:07 boom packet injected
2022/05/13 23:21:07 tcp packet: &{SrcPort:44136 DestPort:9000 Seq:421731062 Ack:4128831410 Flags:32785 WindowSize:229 Checksum:9523 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:09 tcp packet: &{SrcPort:38033 DestPort:9000 Seq:1728722002 Ack:2022745601 Flags:32784 WindowSize:229 Checksum:38247 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:09 tcp packet: &{SrcPort:41645 DestPort:9000 Seq:3094326552 Ack:0 Flags:40962 WindowSize:29200 Checksum:62346 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:09 tcp packet: &{SrcPort:41645 DestPort:9000 Seq:3094326553 Ack:1423872109 Flags:32784 WindowSize:229 Checksum:63069 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:09 connection established
2022/05/13 23:21:09 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 162 173 84 221 9 205 184 111 173 25 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:09 checksumer: &{sum:515044 oddByte:33 length:39}
2022/05/13 23:21:09 ret:  515077
2022/05/13 23:21:09 ret:  56332
2022/05/13 23:21:09 ret:  56332
2022/05/13 23:21:09 boom packet injected
2022/05/13 23:21:09 tcp packet: &{SrcPort:41645 DestPort:9000 Seq:3094326553 Ack:1423872109 Flags:32785 WindowSize:229 Checksum:63068 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:11 tcp packet: &{SrcPort:34571 DestPort:9000 Seq:3254332075 Ack:1449973199 Flags:32784 WindowSize:229 Checksum:13401 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:11 tcp packet: &{SrcPort:36711 DestPort:9000 Seq:2013715869 Ack:0 Flags:40962 WindowSize:29200 Checksum:3812 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:11 tcp packet: &{SrcPort:36711 DestPort:9000 Seq:2013715870 Ack:2734314681 Flags:32784 WindowSize:229 Checksum:63359 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:11 connection established
2022/05/13 23:21:11 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 143 103 162 248 206 25 120 6 221 158 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:11 checksumer: &{sum:465364 oddByte:33 length:39}
2022/05/13 23:21:11 ret:  465397
2022/05/13 23:21:11 ret:  6652
2022/05/13 23:21:11 ret:  6652
2022/05/13 23:21:11 boom packet injected
2022/05/13 23:21:11 tcp packet: &{SrcPort:36711 DestPort:9000 Seq:2013715870 Ack:2734314681 Flags:32785 WindowSize:229 Checksum:63358 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:13 tcp packet: &{SrcPort:34714 DestPort:9000 Seq:900810378 Ack:2418900441 Flags:32784 WindowSize:229 Checksum:45766 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:13 tcp packet: &{SrcPort:45553 DestPort:9000 Seq:3023058155 Ack:0 Flags:40962 WindowSize:29200 Checksum:20754 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:13 tcp packet: &{SrcPort:45553 DestPort:9000 Seq:3023058156 Ack:2197602287 Flags:32784 WindowSize:229 Checksum:59044 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:13 connection established
2022/05/13 23:21:13 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 177 241 130 251 57 79 180 48 52 236 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:13 checksumer: &{sum:545748 oddByte:33 length:39}
2022/05/13 23:21:13 ret:  545781
2022/05/13 23:21:13 ret:  21501
2022/05/13 23:21:13 ret:  21501
2022/05/13 23:21:13 boom packet injected
2022/05/13 23:21:13 tcp packet: &{SrcPort:45553 DestPort:9000 Seq:3023058156 Ack:2197602287 Flags:32785 WindowSize:229 Checksum:59043 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:15 tcp packet: &{SrcPort:36602 DestPort:9000 Seq:1601395361 Ack:3270791241 Flags:32784 WindowSize:229 Checksum:22709 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:15 tcp packet: &{SrcPort:37707 DestPort:9000 Seq:2078450800 Ack:0 Flags:40962 WindowSize:29200 Checksum:12464 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:15 tcp packet: &{SrcPort:37707 DestPort:9000 Seq:2078450801 Ack:1456698242 Flags:32784 WindowSize:229 Checksum:14088 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:15 connection established
2022/05/13 23:21:15 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 147 75 86 209 236 226 123 226 164 113 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:15 checksumer: &{sum:544372 oddByte:33 length:39}
2022/05/13 23:21:15 ret:  544405
2022/05/13 23:21:15 ret:  20125
2022/05/13 23:21:15 ret:  20125
2022/05/13 23:21:15 boom packet injected
2022/05/13 23:21:15 tcp packet: &{SrcPort:37707 DestPort:9000 Seq:2078450801 Ack:1456698242 Flags:32785 WindowSize:229 Checksum:14087 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:17 tcp packet: &{SrcPort:44136 DestPort:9000 Seq:421731063 Ack:4128831411 Flags:32784 WindowSize:229 Checksum:55057 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:17 tcp packet: &{SrcPort:44307 DestPort:9000 Seq:4048689072 Ack:0 Flags:40962 WindowSize:29200 Checksum:9832 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:17 tcp packet: &{SrcPort:44307 DestPort:9000 Seq:4048689073 Ack:2328597665 Flags:32784 WindowSize:229 Checksum:53208 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:17 connection established
2022/05/13 23:21:17 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 173 19 138 202 14 1 241 82 23 177 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:17 checksumer: &{sum:449997 oddByte:33 length:39}
2022/05/13 23:21:17 ret:  450030
2022/05/13 23:21:17 ret:  56820
2022/05/13 23:21:17 ret:  56820
2022/05/13 23:21:17 boom packet injected
2022/05/13 23:21:17 tcp packet: &{SrcPort:44307 DestPort:9000 Seq:4048689073 Ack:2328597665 Flags:32785 WindowSize:229 Checksum:53207 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:19 tcp packet: &{SrcPort:41645 DestPort:9000 Seq:3094326554 Ack:1423872110 Flags:32784 WindowSize:229 Checksum:43066 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:19 tcp packet: &{SrcPort:38455 DestPort:9000 Seq:856682340 Ack:0 Flags:40962 WindowSize:29200 Checksum:7169 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:19 tcp packet: &{SrcPort:38455 DestPort:9000 Seq:856682341 Ack:1086018200 Flags:32784 WindowSize:229 Checksum:18873 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:19 connection established
2022/05/13 23:21:19 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 150 55 64 185 203 248 51 15 239 101 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:19 checksumer: &{sum:481603 oddByte:33 length:39}
2022/05/13 23:21:19 ret:  481636
2022/05/13 23:21:19 ret:  22891
2022/05/13 23:21:19 ret:  22891
2022/05/13 23:21:19 boom packet injected
2022/05/13 23:21:19 tcp packet: &{SrcPort:38455 DestPort:9000 Seq:856682341 Ack:1086018200 Flags:32785 WindowSize:229 Checksum:18872 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:21 tcp packet: &{SrcPort:36711 DestPort:9000 Seq:2013715871 Ack:2734314682 Flags:32784 WindowSize:229 Checksum:43356 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:21 tcp packet: &{SrcPort:45177 DestPort:9000 Seq:1448050947 Ack:0 Flags:40962 WindowSize:29200 Checksum:17680 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:21 tcp packet: &{SrcPort:45177 DestPort:9000 Seq:1448050948 Ack:3169583711 Flags:32784 WindowSize:229 Checksum:16128 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:21 connection established
2022/05/13 23:21:21 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 176 121 188 234 123 191 86 79 129 4 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:21 checksumer: &{sum:487998 oddByte:33 length:39}
2022/05/13 23:21:21 ret:  488031
2022/05/13 23:21:21 ret:  29286
2022/05/13 23:21:21 ret:  29286
2022/05/13 23:21:21 boom packet injected
2022/05/13 23:21:21 tcp packet: &{SrcPort:45177 DestPort:9000 Seq:1448050948 Ack:3169583711 Flags:32785 WindowSize:229 Checksum:16127 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:23 tcp packet: &{SrcPort:45553 DestPort:9000 Seq:3023058157 Ack:2197602288 Flags:32784 WindowSize:229 Checksum:39042 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:23 tcp packet: &{SrcPort:42973 DestPort:9000 Seq:3473507181 Ack:0 Flags:40962 WindowSize:29200 Checksum:51894 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:23 tcp packet: &{SrcPort:42973 DestPort:9000 Seq:3473507182 Ack:187062263 Flags:32784 WindowSize:229 Checksum:6404 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:23 connection established
2022/05/13 23:21:23 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 167 221 11 36 209 87 207 9 131 110 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:23 checksumer: &{sum:445525 oddByte:33 length:39}
2022/05/13 23:21:23 ret:  445558
2022/05/13 23:21:23 ret:  52348
2022/05/13 23:21:23 ret:  52348
2022/05/13 23:21:23 boom packet injected
2022/05/13 23:21:23 tcp packet: &{SrcPort:42973 DestPort:9000 Seq:3473507182 Ack:187062263 Flags:32785 WindowSize:229 Checksum:6403 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:25 tcp packet: &{SrcPort:37707 DestPort:9000 Seq:2078450802 Ack:1456698243 Flags:32784 WindowSize:229 Checksum:59621 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:25 tcp packet: &{SrcPort:46855 DestPort:9000 Seq:3997689298 Ack:0 Flags:40962 WindowSize:29200 Checksum:12825 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:25 tcp packet: &{SrcPort:46855 DestPort:9000 Seq:3997689299 Ack:2865347446 Flags:32784 WindowSize:229 Checksum:30067 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:25 connection established
2022/05/13 23:21:25 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 183 7 170 200 52 214 238 71 229 211 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:25 checksumer: &{sum:507112 oddByte:33 length:39}
2022/05/13 23:21:25 ret:  507145
2022/05/13 23:21:25 ret:  48400
2022/05/13 23:21:25 ret:  48400
2022/05/13 23:21:25 boom packet injected
2022/05/13 23:21:25 tcp packet: &{SrcPort:46855 DestPort:9000 Seq:3997689299 Ack:2865347446 Flags:32785 WindowSize:229 Checksum:30066 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:27 tcp packet: &{SrcPort:44307 DestPort:9000 Seq:4048689074 Ack:2328597666 Flags:32784 WindowSize:229 Checksum:33204 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:27 tcp packet: &{SrcPort:44690 DestPort:9000 Seq:2131308648 Ack:0 Flags:40962 WindowSize:29200 Checksum:22374 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:27 tcp packet: &{SrcPort:44690 DestPort:9000 Seq:2131308649 Ack:270184504 Flags:32784 WindowSize:229 Checksum:14557 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:27 connection established
2022/05/13 23:21:27 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 174 146 16 25 41 152 127 9 48 105 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:27 checksumer: &{sum:438550 oddByte:33 length:39}
2022/05/13 23:21:27 ret:  438583
2022/05/13 23:21:27 ret:  45373
2022/05/13 23:21:27 ret:  45373
2022/05/13 23:21:27 boom packet injected
2022/05/13 23:21:27 tcp packet: &{SrcPort:44690 DestPort:9000 Seq:2131308649 Ack:270184504 Flags:32785 WindowSize:229 Checksum:14556 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:29 tcp packet: &{SrcPort:38455 DestPort:9000 Seq:856682342 Ack:1086018201 Flags:32784 WindowSize:229 Checksum:64406 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:29 tcp packet: &{SrcPort:35296 DestPort:9000 Seq:1037007297 Ack:0 Flags:40962 WindowSize:29200 Checksum:27687 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:29 tcp packet: &{SrcPort:35296 DestPort:9000 Seq:1037007298 Ack:3383172010 Flags:32784 WindowSize:229 Checksum:8387 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:29 connection established
2022/05/13 23:21:29 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 137 224 201 165 149 10 61 207 121 194 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:29 checksumer: &{sum:531741 oddByte:33 length:39}
2022/05/13 23:21:29 ret:  531774
2022/05/13 23:21:29 ret:  7494
2022/05/13 23:21:29 ret:  7494
2022/05/13 23:21:29 boom packet injected
2022/05/13 23:21:29 tcp packet: &{SrcPort:35296 DestPort:9000 Seq:1037007298 Ack:3383172010 Flags:32785 WindowSize:229 Checksum:8386 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:31 tcp packet: &{SrcPort:45177 DestPort:9000 Seq:1448050949 Ack:3169583712 Flags:32784 WindowSize:229 Checksum:61660 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:31 tcp packet: &{SrcPort:44287 DestPort:9000 Seq:4277498663 Ack:0 Flags:40962 WindowSize:29200 Checksum:34474 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:31 tcp packet: &{SrcPort:44287 DestPort:9000 Seq:4277498664 Ack:3757894403 Flags:32784 WindowSize:229 Checksum:19913 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:31 connection established
2022/05/13 23:21:31 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 172 255 223 251 100 99 254 245 115 40 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:31 checksumer: &{sum:554976 oddByte:33 length:39}
2022/05/13 23:21:31 ret:  555009
2022/05/13 23:21:31 ret:  30729
2022/05/13 23:21:31 ret:  30729
2022/05/13 23:21:31 boom packet injected
2022/05/13 23:21:31 tcp packet: &{SrcPort:44287 DestPort:9000 Seq:4277498664 Ack:3757894403 Flags:32785 WindowSize:229 Checksum:19912 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:33 tcp packet: &{SrcPort:42973 DestPort:9000 Seq:3473507183 Ack:187062264 Flags:32784 WindowSize:229 Checksum:51937 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:33 tcp packet: &{SrcPort:37880 DestPort:9000 Seq:767490013 Ack:0 Flags:40962 WindowSize:29200 Checksum:58466 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:33 tcp packet: &{SrcPort:37880 DestPort:9000 Seq:767490014 Ack:125878067 Flags:32784 WindowSize:229 Checksum:43011 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:33 connection established
2022/05/13 23:21:33 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 147 248 7 127 56 147 45 190 247 222 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:33 checksumer: &{sum:565878 oddByte:33 length:39}
2022/05/13 23:21:33 ret:  565911
2022/05/13 23:21:33 ret:  41631
2022/05/13 23:21:33 ret:  41631
2022/05/13 23:21:33 boom packet injected
2022/05/13 23:21:33 tcp packet: &{SrcPort:37880 DestPort:9000 Seq:767490014 Ack:125878067 Flags:32785 WindowSize:229 Checksum:43010 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:35 tcp packet: &{SrcPort:46855 DestPort:9000 Seq:3997689300 Ack:2865347447 Flags:32784 WindowSize:229 Checksum:10063 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:35 tcp packet: &{SrcPort:45526 DestPort:9000 Seq:4103152676 Ack:0 Flags:40962 WindowSize:29200 Checksum:52122 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.160
2022/05/13 23:21:35 tcp packet: &{SrcPort:45526 DestPort:9000 Seq:4103152677 Ack:2667337829 Flags:32785 WindowSize:229 Checksum:22204 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.160
2022/05/13 23:21:35 connection established
2022/05/13 23:21:35 calling checksumTCP: 10.244.4.46 10.244.3.160 [35 40 177 214 158 250 209 197 244 145 36 37 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/13 23:21:35 checksumer: &{sum:542904 oddByte:33 length:39}
2022/05/13 23:21:35 ret:  542937
2022/05/13 23:21:35 ret:  18657
2022/05/13 23:21:35 ret:  18657
2022/05/13 23:21:35 boom packet injected
2022/05/13 23:21:35 tcp packet: &{SrcPort:45526 DestPort:9000 Seq:4103152677 Ack:2667337829 Flags:32784 WindowSize:229 Checksum:22205 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.160
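The Flags values in these packet lines decode cleanly if the field is read as the raw 16-bit TCP offset+flags word: 40962 (0xA002) carries SYN with a 40-byte header, 32784 (0x8010) is ACK with a 32-byte header, and 32785 (0x8011) is FIN ACK. A sketch under that assumption (the field packing is inferred from the log, not confirmed against the boom-server source):

```python
TCP_FLAG_BITS = [("FIN", 0x001), ("SYN", 0x002), ("RST", 0x004),
                 ("PSH", 0x008), ("ACK", 0x010), ("URG", 0x020)]

def decode_flags(word: int) -> str:
    """Interpret a logged Flags value as data offset (high nibble) plus
    flag bits (low 9 bits) -- an assumption inferred from the log."""
    names = [name for name, bit in TCP_FLAG_BITS if word & 0x1FF & bit]
    return " ".join(names)

def header_len(word: int) -> int:
    """TCP header length in bytes; data offset counts 32-bit words."""
    return (word >> 12) * 4
```

Under this reading, `decode_flags(40962)` yields `"SYN"` and `header_len(40962)` yields 40 (a SYN carrying options), consistent with the `flag: SYN` annotations above.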

May 13 23:21:37.274: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-9477" for this suite.


• [SLOW TEST:72.142 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":252,"failed":0}
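The passing assertion above ("did not receive any RST packet") reduces to watching captured TCP segments for the RST bit: if conntrack correctly drops the INVALID boom packets, the server side never answers them with a reset. A minimal IPv4-parsing sketch of that check (the real boom-server is a Go raw-socket listener; this is an illustrative reconstruction):

```python
def has_rst(packet: bytes) -> bool:
    """Return True if an IPv4 packet's TCP segment has the RST bit set."""
    ihl = (packet[0] & 0x0F) * 4       # IPv4 header length in bytes
    tcp_flags = packet[ihl + 13]       # flags byte is the 14th TCP octet
    return bool(tcp_flags & 0x04)      # 0x04 is the RST bit

# Minimal 20-byte IPv4 header followed by a 20-byte TCP header.
rst_pkt = bytes([0x45]) + bytes(19) + bytes(13) + bytes([0x04]) + bytes(6)
ack_pkt = bytes([0x45]) + bytes(19) + bytes(13) + bytes([0x10]) + bytes(6)
```

In the test this predicate would run over every packet received during the injection window; a single True is a failure.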

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:09.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-5295
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:09.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:09.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:11.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:13.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:15.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:17.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:19.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:21.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:23.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:25.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:27.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:29.843: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:31.844: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:31.848: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:21:37.896: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:37.897: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:37.903: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:37.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5295" for this suite.


S [SKIPPING] [28.252 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
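The netserver readiness polling in the spec above (one status probe roughly every two seconds, bounded by a 10-minute timeout) follows the e2e framework's standard wait loop. A minimal sketch, where `get_status` is a hypothetical stand-in for the pod-status lookup:

```python
import time

def wait_for_ready(get_status, timeout_s=600, interval_s=2.0):
    """Poll get_status() -> (phase, ready) until the pod is Running and
    Ready, or the timeout elapses. Mirrors the log's retry cadence."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase, ready = get_status()
        if phase == "Running" and ready:
            return True
        time.sleep(interval_s)
    return False
```

Each probe corresponds to one "The status of Pod netserver-0 is ..." line; the loop exits as soon as the pod reports Running with Ready = true.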
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:37.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
May 13 23:21:37.998: INFO: (0) /api/v1/nodes/node2/proxy/logs/: 
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
May 13 23:21:38.018: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
May 13 23:21:38.123: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:38.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1348" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.143 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:11.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-5640
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:11.861: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:11.894: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:13.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:15.898: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:17.899: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:19.898: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:21.898: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:23.900: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:25.900: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:27.898: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:29.897: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:31.900: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:33.898: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:33.902: INFO: The status of Pod netserver-1 is Running (Ready = true)
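The readiness sequence above (Pending, then Running with Ready=false, then Ready=true, sampled every ~2 seconds) is a plain fixed-interval wait loop. A minimal sketch of that bookkeeping, where `get_status` is a hypothetical stand-in for a real pod-status read, not the framework's actual code:

```python
# Minimal sketch of a fixed-interval readiness poll, as seen in the log:
# keep polling until the pod reports phase Running with Ready=True.
# `get_status` is a stand-in for a real API read; here it is any callable
# returning (phase, ready) tuples.

def wait_until_ready(get_status, max_polls=300):
    """Poll until (phase, ready) == ("Running", True); return poll count."""
    for attempt in range(1, max_polls + 1):
        phase, ready = get_status()
        if phase == "Running" and ready:
            return attempt
        # a real loop would sleep ~2s here between attempts
    raise TimeoutError("pod never became Ready")

# Simulate the netserver-0 sequence from the log above:
# 3 polls Pending, 8 polls Running/not-Ready, then Running/Ready.
sequence = iter(
    [("Pending", False)] * 3 + [("Running", False)] * 8 + [("Running", True)]
)
polls = wait_until_ready(lambda: next(sequence))
```

With the simulated sequence this returns 12, matching the twelve status lines logged for netserver-0 before it became Ready.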
STEP: Creating test pods
May 13 23:21:41.947: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:21:41.947: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:41.953: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:41.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5640" for this suite.


S [SKIPPING] [30.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:01.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
May 13 23:21:01.933: INFO: deploying iperf2 server
May 13 23:21:01.937: INFO: Waiting for deployment "iperf2-server-deployment" to complete
May 13 23:21:01.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May 13 23:21:03.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080861, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080861, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080861, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788080861, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 13 23:21:05.953: INFO: waiting for iperf2 server endpoints
May 13 23:21:07.959: INFO: found iperf2 server endpoints
May 13 23:21:07.959: INFO: waiting for client pods to be running
May 13 23:21:11.963: INFO: all client pods are ready: 2 pods
May 13 23:21:11.966: INFO: server pod phase Running
May 13 23:21:11.966: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 23:21:01 +0000 UTC Reason: Message:}
May 13 23:21:11.966: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 23:21:05 +0000 UTC Reason: Message:}
May 13 23:21:11.966: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 23:21:05 +0000 UTC Reason: Message:}
May 13 23:21:11.966: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-13 23:21:01 +0000 UTC Reason: Message:}
May 13 23:21:11.966: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-05-13 23:21:04 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://5d258d0b0aa7fc8abca216f56e4509e97b7802616e1204f6adfa87401ffdbfd5 Started:0xc003e4355b}
May 13 23:21:11.966: INFO: found 2 matching client pods
May 13 23:21:11.969: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9034 PodName:iperf2-clients-947tv ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:11.969: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:12.053: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May 13 23:21:12.053: INFO: iperf version: 
May 13 23:21:12.053: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-947tv (node node2)
May 13 23:21:12.055: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9034 PodName:iperf2-clients-947tv ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:12.055: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:27.289: INFO: Exec stderr: ""
May 13 23:21:27.289: INFO: output from exec on client pod iperf2-clients-947tv (node node2): 
20220513232113.158,10.244.4.63,45750,10.233.49.86,6789,3,0.0-1.0,3441426432,27531411456
20220513232114.145,10.244.4.63,45750,10.233.49.86,6789,3,1.0-2.0,3402891264,27223130112
20220513232115.151,10.244.4.63,45750,10.233.49.86,6789,3,2.0-3.0,3425042432,27400339456
20220513232116.158,10.244.4.63,45750,10.233.49.86,6789,3,3.0-4.0,3481272320,27850178560
20220513232117.145,10.244.4.63,45750,10.233.49.86,6789,3,4.0-5.0,3336699904,26693599232
20220513232118.152,10.244.4.63,45750,10.233.49.86,6789,3,5.0-6.0,3519152128,28153217024
20220513232119.159,10.244.4.63,45750,10.233.49.86,6789,3,6.0-7.0,3462397952,27699183616
20220513232120.146,10.244.4.63,45750,10.233.49.86,6789,3,7.0-8.0,3470786560,27766292480
20220513232121.152,10.244.4.63,45750,10.233.49.86,6789,3,8.0-9.0,3425304576,27402436608
20220513232122.160,10.244.4.63,45750,10.233.49.86,6789,3,9.0-10.0,3451518976,27612151808
20220513232122.160,10.244.4.63,45750,10.233.49.86,6789,3,0.0-10.0,34416492544,27533103175

May 13 23:21:27.292: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9034 PodName:iperf2-clients-fsvr5 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:27.292: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:27.406: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May 13 23:21:27.406: INFO: iperf version: 
May 13 23:21:27.406: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-fsvr5 (node node1)
May 13 23:21:27.410: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9034 PodName:iperf2-clients-fsvr5 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:27.410: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:42.582: INFO: Exec stderr: ""
May 13 23:21:42.582: INFO: output from exec on client pod iperf2-clients-fsvr5 (node node1): 
20220513232128.517,10.244.3.172,56962,10.233.49.86,6789,3,0.0-1.0,67895296,543162368
20220513232129.533,10.244.3.172,56962,10.233.49.86,6789,3,1.0-2.0,118882304,951058432
20220513232130.521,10.244.3.172,56962,10.233.49.86,6789,3,2.0-3.0,117440512,939524096
20220513232131.528,10.244.3.172,56962,10.233.49.86,6789,3,3.0-4.0,107872256,862978048
20220513232132.601,10.244.3.172,56962,10.233.49.86,6789,3,4.0-5.0,83099648,664797184
20220513232133.524,10.244.3.172,56962,10.233.49.86,6789,3,5.0-6.0,110493696,883949568
20220513232134.524,10.244.3.172,56962,10.233.49.86,6789,3,6.0-7.0,77332480,618659840
20220513232135.534,10.244.3.172,56962,10.233.49.86,6789,3,7.0-8.0,118226944,945815552
20220513232136.522,10.244.3.172,56962,10.233.49.86,6789,3,8.0-9.0,116260864,930086912
20220513232137.547,10.244.3.172,56962,10.233.49.86,6789,3,9.0-10.0,118226944,945815552
20220513232137.547,10.244.3.172,56962,10.233.49.86,6789,3,0.0-10.0,1035730944,828102551

May 13 23:21:42.582: INFO:                                From                                 To    Bandwidth (MB/s)
May 13 23:21:42.582: INFO:                               node2                              node2                3282
May 13 23:21:42.582: INFO:                               node1                              node2                  99
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:21:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-9034" for this suite.


• [SLOW TEST:40.686 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":566,"failed":0}
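The per-node bandwidth table printed at the end of the iperf2 spec is derived from the `--reportstyle C` CSV rows: the final `0.0-10.0` row carries the total bytes transferred, and total bytes divided by the interval length, converted to MiB, gives the MB/s column. A small parser sketch; the column order assumed here is the usual iperf 2.0.x CSV layout (timestamp, src IP, src port, dst IP, dst port, stream id, interval, bytes, bits/s):

```python
# Reduce iperf2 --reportstyle C output to a MB/s figure, as the e2e test does:
# take the summary row (interval "0.0-10.0"), divide transferred bytes by the
# interval length, and convert bytes/s to MB/s (MiB, 2**20).

def bandwidth_mb_per_s(csv_lines):
    """Return int MB/s from the last (summary) CSV row of an iperf2 run."""
    last = csv_lines[-1].split(",")
    start, end = (float(x) for x in last[6].split("-"))  # e.g. "0.0-10.0"
    total_bytes = int(last[7])
    return round(total_bytes / (end - start) / 2**20)

# Summary rows from the two client pods in the log:
node2_row = "20220513232122.160,10.244.4.63,45750,10.233.49.86,6789,3,0.0-10.0,34416492544,27533103175"
node1_row = "20220513232137.547,10.244.3.172,56962,10.233.49.86,6789,3,0.0-10.0,1035730944,828102551"
```

Applied to the two summary rows above, this reproduces the logged table: 3282 MB/s for the node2 client (same node as the server) and 99 MB/s for the node1 client.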

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:57.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-1488
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:19:57.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:19:57.227: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:19:59.230: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:01.230: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:03.230: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:05.230: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:20:07.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:09.236: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:11.230: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:13.237: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:15.232: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:17.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:20:19.235: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:20:19.240: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:20:23.262: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:20:23.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May 13 23:20:23.283: INFO: Service node-port-service in namespace nettest-1488 found.
May 13 23:20:23.296: INFO: Service session-affinity-service in namespace nettest-1488 found.
STEP: Waiting for NodePort service to expose endpoint
May 13 23:20:24.300: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May 13 23:20:25.305: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) netserver-0 (endpoint) --> 10.233.49.153:90 (config.clusterIP)
May 13 23:20:25.311: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.233.49.153&port=90&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:25.311: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:25.399: INFO: Waiting for responses: map[netserver-1:{}]
May 13 23:20:27.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.233.49.153&port=90&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:27.408: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:27.502: INFO: Waiting for responses: map[]
May 13 23:20:27.502: INFO: reached 10.233.49.153 after 1/34 tries
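The `Waiting for responses: map[...]` lines above track which expected endpoint hostnames have not yet answered a `/dial` probe: each curl attempt (with `tries=1`) may reach only some endpoints, answered names are removed from the set, and the step completes once the map is empty. A sketch of that bookkeeping, with the probe replies modeled simply as lists of hostnames (an assumption about the reply shape, not taken from the log):

```python
# Sketch of the "Waiting for responses: map[...]" bookkeeping: start from the
# full set of expected endpoint hostnames and remove each one that answers a
# dial probe; the wait ends when nothing remains.

def remaining_after(expected, probe_results):
    """Return the hostnames still awaited after a series of probe replies.

    probe_results: iterable of lists of hostnames, one list per curl attempt
    (each attempt uses tries=1, so it may reach only some endpoints).
    """
    remaining = set(expected)
    for responses in probe_results:
        remaining -= set(responses)
        if not remaining:
            break
    return remaining

# Mirror the clusterIP dial above: the first probe reaches netserver-0 only
# (leaving map[netserver-1:{}]), the second reaches netserver-1 (map[]).
expected = ["netserver-0", "netserver-1"]
left = remaining_after(expected, [["netserver-0"], ["netserver-1"]])
```

After the second simulated probe `left` is empty, the point at which the framework logs `reached 10.233.49.153 after 1/34 tries`.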
STEP: dialing(udp) netserver-0 (endpoint) --> 10.10.190.207:30729 (nodeIP)
May 13 23:20:27.504: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:27.504: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:27.605: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:29.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:29.608: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:29.755: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:31.759: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:31.759: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:32.009: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:34.015: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:34.015: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:34.115: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:36.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:36.119: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:36.383: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:38.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:38.388: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:38.496: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:40.504: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:40.504: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:40.675: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:42.679: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:42.679: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:43.181: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:45.184: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:45.184: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:45.360: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:47.364: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:47.364: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:47.930: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:49.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:49.932: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:50.032: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:52.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:52.036: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:52.480: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:54.484: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:54.484: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:55.045: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:57.050: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:57.050: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:57.306: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:20:59.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:20:59.313: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:20:59.404: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:01.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:01.408: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:01.494: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:03.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:03.497: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:03.593: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:05.596: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:05.596: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:05.687: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:07.692: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:07.692: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:07.888: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:09.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:09.895: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:09.984: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:11.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:11.989: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:12.115: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:14.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:14.120: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:14.239: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:16.243: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:16.243: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:16.370: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:18.373: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:18.373: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:18.458: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:20.463: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:20.463: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:20.554: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:22.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:22.559: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:22.644: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:24.647: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:24.647: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:24.864: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:26.868: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:26.868: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:26.970: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:28.976: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:28.976: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:29.107: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:31.110: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:31.110: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:31.671: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:33.674: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:33.674: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:34.038: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:36.041: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:36.042: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:36.134: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:38.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:38.138: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:38.227: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
May 13 23:21:40.230: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'] Namespace:nettest-1488 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:21:40.230: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:21:40.334: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
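The five attempts above all issue the same agnhost `/dial` probe from inside test-container-pod. A minimal sketch of how that probe URL is put together (values taken from this log; the `build_dial_url` helper is illustrative, not an e2e-framework API):

```shell
# Sketch of the probe URL the retry loop above keeps issuing (values from this
# log; the build_dial_url helper is illustrative, not an e2e-framework API).
build_dial_url() {
  pod_ip=$1; host=$2; port=$3; proto=$4
  printf 'http://%s:8080/dial?request=hostname&protocol=%s&host=%s&port=%s&tries=1' \
    "$pod_ip" "$proto" "$host" "$port"
}
url=$(build_dial_url 10.244.3.146 10.10.190.207 30729 udp)
# The framework then runs, inside test-container-pod:
#   /bin/sh -c "curl -g -q -s '$url'"
echo "$url"
```

Here 10.244.3.146 is test-container-pod's own IP and 10.10.190.207:30729 is the node/NodePort under test, so the probe exercises the UDP service path from inside the cluster.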
May 13 23:21:42.335: INFO: 
Output of kubectl describe pod nettest-1488/netserver-0:

May 13 23:21:42.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-1488 describe pod netserver-0 --namespace=nettest-1488'
May 13 23:21:42.525: INFO: stderr: ""
May 13 23:21:42.525: INFO: stdout: "Name:         netserver-0\nNamespace:    nettest-1488\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Fri, 13 May 2022 23:19:57 +0000\nLabels:       selector-758ee24d-451c-4a9f-95b7-0aa0f5e4e056=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.146\"\n                    ],\n                    \"mac\": \"9a:e9:3f:9e:21:9b\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.146\"\n                    ],\n                    \"mac\": \"9a:e9:3f:9e:21:9b\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.146\nIPs:\n  IP:  10.244.3.146\nContainers:\n  webserver:\n    Container ID:  docker://bebe0089935601880c55082779d393b83883e54dae3bef2dcfec357fb41e9bfc\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 13 May 2022 23:20:03 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qz46k (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-qz46k:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node1\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  105s  default-scheduler  Successfully assigned nettest-1488/netserver-0 to node1\n  Normal  Pulling    101s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     100s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 612.607912ms\n  Normal  Created    99s   kubelet            Created container webserver\n  Normal  Started    99s   kubelet            Started container webserver\n"
May 13 23:21:42.525: INFO: Name:         netserver-0
Namespace:    nettest-1488
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 13 May 2022 23:19:57 +0000
Labels:       selector-758ee24d-451c-4a9f-95b7-0aa0f5e4e056=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.146"
                    ],
                    "mac": "9a:e9:3f:9e:21:9b",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.146"
                    ],
                    "mac": "9a:e9:3f:9e:21:9b",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.146
IPs:
  IP:  10.244.3.146
Containers:
  webserver:
    Container ID:  docker://bebe0089935601880c55082779d393b83883e54dae3bef2dcfec357fb41e9bfc
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 13 May 2022 23:20:03 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qz46k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-qz46k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  105s  default-scheduler  Successfully assigned nettest-1488/netserver-0 to node1
  Normal  Pulling    101s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     100s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 612.607912ms
  Normal  Created    99s   kubelet            Created container webserver
  Normal  Started    99s   kubelet            Started container webserver

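The describe output above shows the netserver pods run `agnhost netexec`, serving HTTP on 8080 and UDP on 8081, while the `/dial` client collects the hostnames that answer. A hedged sketch of checking such a reply (the JSON shape below is a typical agnhost response used as sample data, not output captured from this run):

```shell
# Sample /dial reply; agnhost reports the responders under a "responses" array
# (sample data -- this failing run retrieved an empty map instead).
response='{"responses":["netserver-0","netserver-1"]}'
# Pass only when every expected netserver hostname appears in the reply:
for want in netserver-0 netserver-1; do
  echo "$response" | grep -q "\"$want\"" || { echo "missing $want"; exit 1; }
done
echo "all endpoints responded"
```

A healthy run would surface both pod hostnames this way; the run logged here collected none after 34 tries.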
May 13 23:21:42.525: INFO: 
Output of kubectl describe pod nettest-1488/netserver-1:

May 13 23:21:42.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-1488 describe pod netserver-1 --namespace=nettest-1488'
May 13 23:21:42.710: INFO: stderr: ""
May 13 23:21:42.710: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-1488\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Fri, 13 May 2022 23:19:57 +0000\nLabels:       selector-758ee24d-451c-4a9f-95b7-0aa0f5e4e056=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.29\"\n                    ],\n                    \"mac\": \"16:25:eb:cd:62:ec\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.29\"\n                    ],\n                    \"mac\": \"16:25:eb:cd:62:ec\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.4.29\nIPs:\n  IP:  10.244.4.29\nContainers:\n  webserver:\n    Container ID:  docker://682e80c8520ba36379866b4729906c3a90fbb80f9666ce630c7685a13cdeff20\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 13 May 2022 23:20:02 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9x5lf (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-9x5lf:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  105s  default-scheduler  Successfully assigned nettest-1488/netserver-1 to node2\n  Normal  Pulling    101s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     100s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 487.838589ms\n  Normal  Created    100s  kubelet            Created container webserver\n  Normal  Started    100s  kubelet            Started container webserver\n"
May 13 23:21:42.710: INFO: Name:         netserver-1
Namespace:    nettest-1488
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 13 May 2022 23:19:57 +0000
Labels:       selector-758ee24d-451c-4a9f-95b7-0aa0f5e4e056=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.29"
                    ],
                    "mac": "16:25:eb:cd:62:ec",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.29"
                    ],
                    "mac": "16:25:eb:cd:62:ec",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.29
IPs:
  IP:  10.244.4.29
Containers:
  webserver:
    Container ID:  docker://682e80c8520ba36379866b4729906c3a90fbb80f9666ce630c7685a13cdeff20
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 13 May 2022 23:20:02 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9x5lf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-9x5lf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  105s  default-scheduler  Successfully assigned nettest-1488/netserver-1 to node2
  Normal  Pulling    101s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     100s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 487.838589ms
  Normal  Created    100s  kubelet            Created container webserver
  Normal  Started    100s  kubelet            Started container webserver

May 13 23:21:42.710: INFO: encountered error during dial (did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}])
May 13 23:21:42.711: FAIL: failed dialing endpoint, did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]
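The FAIL above reduces to a set comparison: the hostnames the probe retrieved versus the expected endpoints. A minimal shell sketch of that check, using the names from this log (the variable names are illustrative, not framework identifiers):

```shell
# Minimal sketch of the comparison behind the FAIL above: the set of hostnames
# the probe retrieved versus the expected endpoints (names from this log;
# variable names are illustrative).
expected="netserver-0 netserver-1"
retrieved=""   # this run retrieved map[] -- no responders after 34 tries
missing=""
for e in $expected; do
  case " $retrieved " in
    *" $e "*) ;;                    # endpoint answered at least once
    *) missing="$missing $e" ;;     # endpoint never answered
  esac
done
[ -z "$missing" ] || echo "did not find expected responses:$missing"
```

Because the retrieved set is empty while both netserver pods are Ready, the failure points at the UDP NodePort path rather than at the pods themselves.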

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004abe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0004abe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0004abe00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-1488".
STEP: Found 15 events.
May 13 23:21:42.716: INFO: At 2022-05-13 23:19:57 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-1488/netserver-0 to node1
May 13 23:21:42.716: INFO: At 2022-05-13 23:19:57 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-1488/netserver-1 to node2
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:01 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:01 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:02 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 612.607912ms
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:02 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:02 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:02 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 487.838589ms
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:03 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:03 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:19 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-1488/test-container-pod to node1
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:21 +0000 UTC - event for test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:21 +0000 UTC - event for test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 282.036312ms
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:21 +0000 UTC - event for test-container-pod: {kubelet node1} Created: Created container webserver
May 13 23:21:42.716: INFO: At 2022-05-13 23:20:22 +0000 UTC - event for test-container-pod: {kubelet node1} Started: Started container webserver
May 13 23:21:42.719: INFO: POD                 NODE   PHASE    GRACE  CONDITIONS
May 13 23:21:42.719: INFO: netserver-0         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:57 +0000 UTC  }]
May 13 23:21:42.719: INFO: netserver-1         node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:57 +0000 UTC  }]
May 13 23:21:42.719: INFO: test-container-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:19 +0000 UTC  }]
May 13 23:21:42.719: INFO: 
May 13 23:21:42.724: INFO: 
Logging node info for node master1
May 13 23:21:42.727: INFO: Node Info: &Node{ObjectMeta:{master1    e893469e-45f9-457b-9379-276178f6209f 74831 0 2022-05-13 19:57:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:21:42.727: INFO: 
Logging kubelet events for node master1
May 13 23:21:42.730: INFO: 
Logging pods the kubelet thinks are on node master1
May 13 23:21:42.742: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:21:42.742: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container docker-registry ready: true, restart count 0
May 13 23:21:42.742: INFO: 	Container nginx ready: true, restart count 0
May 13 23:21:42.742: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:21:42.742: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:21:42.742: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-scheduler ready: true, restart count 0
May 13 23:21:42.742: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:21:42.742: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:21:42.742: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:21:42.742: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container nfd-controller ready: true, restart count 0
May 13 23:21:42.742: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:42.742: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:42.743: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:21:42.842: INFO: 
Latency metrics for node master1
May 13 23:21:42.842: INFO: 
Logging node info for node master2
May 13 23:21:42.844: INFO: Node Info: &Node{ObjectMeta:{master2    6394fb00-7ac6-4b0d-af37-0e7baf892992 74840 0 2022-05-13 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:36 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:36 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:36 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:21:36 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:21:42.844: INFO: 
Logging kubelet events for node master2
May 13 23:21:42.846: INFO: 
Logging pods the kubelet thinks are on node master2
May 13 23:21:42.868: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:21:42.868: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:21:42.868: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:42.868: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:21:42.868: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:21:42.868: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:21:42.868: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:21:42.868: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:21:42.868: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:21:42.868: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.868: INFO: 	Container coredns ready: true, restart count 1
May 13 23:21:42.951: INFO: 
Latency metrics for node master2
May 13 23:21:42.951: INFO: 
Logging node info for node master3
May 13 23:21:42.954: INFO: Node Info: &Node{ObjectMeta:{master3    11a40d0b-d9d1-449f-a587-cc897edbfd9b 74785 0 2022-05-13 19:58:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:33 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:21:33 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:21:42.955: INFO: 
Logging kubelet events for node master3
May 13 23:21:42.957: INFO: 
Logging pods the kubelet thinks are on node master3
May 13 23:21:42.971: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.971: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:21:42.971: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.971: INFO: 	Container coredns ready: true, restart count 1
May 13 23:21:42.971: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.971: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:21:42.971: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.971: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:21:42.971: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.971: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:21:42.971: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:21:42.972: INFO: 	Init container install-cni ready: true, restart count 0
May 13 23:21:42.972: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:21:42.972: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.972: INFO: 	Container autoscaler ready: true, restart count 1
May 13 23:21:42.972: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:42.972: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:42.972: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:21:42.972: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:42.972: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:21:43.051: INFO: 
Latency metrics for node master3
May 13 23:21:43.051: INFO: 
Logging node info for node node1
May 13 23:21:43.053: INFO: Node Info: &Node{ObjectMeta:{node1    dca01e5e-a739-4ccc-b102-bfd163c4b832 74832 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:26:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-13 23:04:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:21:35 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:21:43.054: INFO: 
Logging kubelet events for node node1
May 13 23:21:43.056: INFO: 
Logging pods the kubelet thinks are on node node1
May 13 23:21:43.268: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 13 23:21:43.268: INFO: up-down-2-rr2wz started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:21:43.268: INFO: externalip-test-sh867 started at 2022-05-13 23:21:42 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container externalip-test ready: false, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at  (0+0 container statuses recorded)
May 13 23:21:43.268: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:21:43.268: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:21:43.268: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:21:43.268: INFO: up-down-2-69k5s started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:21:43.268: INFO: service-headless-xflzh started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:21:43.268: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container cmk-webhook ready: true, restart count 0
May 13 23:21:43.268: INFO: nodeport-update-service-k576q started at 2022-05-13 23:19:56 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at 2022-05-13 23:19:57 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:43.268: INFO: up-down-1-fql58 started at 2022-05-13 23:21:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container up-down-1 ready: true, restart count 0
May 13 23:21:43.268: INFO: externalip-test-tf8cs started at 2022-05-13 23:21:42 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container externalip-test ready: false, restart count 0
May 13 23:21:43.268: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container collectd ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:21:43.268: INFO: service-proxy-toggled-8b4db started at 2022-05-13 23:20:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container service-proxy-toggled ready: false, restart count 0
May 13 23:21:43.268: INFO: service-headless-zmq4r started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:21:43.268: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:21:43.268: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:21:43.268: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:21:43.268: INFO: verify-service-up-host-exec-pod started at 2022-05-13 23:21:35 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at 2022-05-13 23:21:09 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:43.268: INFO: up-down-2-sr9jb started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:21:43.268: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:21:43.268: INFO: nodeport-update-service-d24hz started at 2022-05-13 23:19:56 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: false, restart count 0
May 13 23:21:43.268: INFO: test-container-pod started at 2022-05-13 23:20:19 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at 2022-05-13 23:21:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:43.268: INFO: netserver-0 started at 2022-05-13 23:21:26 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container webserver ready: false, restart count 0
May 13 23:21:43.268: INFO: e2e-net-exec started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container e2e-net-exec ready: false, restart count 0
May 13 23:21:43.268: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:21:43.268: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container discover ready: false, restart count 0
May 13 23:21:43.268: INFO: 	Container init ready: false, restart count 0
May 13 23:21:43.268: INFO: 	Container install ready: false, restart count 0
May 13 23:21:43.268: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container config-reloader ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container grafana ready: true, restart count 0
May 13 23:21:43.268: INFO: 	Container prometheus ready: true, restart count 1
May 13 23:21:43.268: INFO: iperf2-clients-fsvr5 started at 2022-05-13 23:21:05 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container iperf2-client ready: true, restart count 0
May 13 23:21:43.268: INFO: pod-client started at 2022-05-13 23:21:09 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container pod-client ready: true, restart count 0
May 13 23:21:43.268: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:21:43.268: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 23:21:43.268: INFO: startup-script started at 2022-05-13 23:20:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container startup-script ready: true, restart count 0
May 13 23:21:43.268: INFO: up-down-1-w4djp started at 2022-05-13 23:21:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:43.268: INFO: 	Container up-down-1 ready: true, restart count 0
May 13 23:21:44.761: INFO: 
Latency metrics for node node1
May 13 23:21:44.761: INFO: 
Logging node info for node node2
May 13 23:21:44.764: INFO: Node Info: &Node{ObjectMeta:{node2    461ea6c2-df11-4be4-802e-29bddc0f2535 74935 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:24:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:41 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:41 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:21:41 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:21:41 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:21:44.765: INFO: 
Logging kubelet events for node node2
May 13 23:21:44.767: INFO: 
Logging pods the kubelet thinks are on node node2
May 13 23:21:44.799: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:44.799: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:21:44.799: INFO: netserver-1 started at 2022-05-13 23:21:26 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container webserver ready: false, restart count 0
May 13 23:21:44.799: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:21:44.799: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container tas-extender ready: true, restart count 0
May 13 23:21:44.799: INFO: execpodswfnp started at 2022-05-13 23:20:05 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:21:44.799: INFO: boom-server started at 2022-05-13 23:20:25 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container boom-server ready: true, restart count 0
May 13 23:21:44.799: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:21:44.799: INFO: netserver-1 started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container webserver ready: false, restart count 0
May 13 23:21:44.799: INFO: netserver-1 started at 2022-05-13 23:19:57 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:44.799: INFO: up-down-1-s25ql started at 2022-05-13 23:21:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container up-down-1 ready: true, restart count 0
May 13 23:21:44.799: INFO: iperf2-clients-947tv started at 2022-05-13 23:21:05 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container iperf2-client ready: true, restart count 0
May 13 23:21:44.799: INFO: netserver-1 started at 2022-05-13 23:21:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:44.799: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:21:44.799: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:21:44.799: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.799: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:21:44.799: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container collectd ready: true, restart count 0
May 13 23:21:44.800: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:21:44.800: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:21:44.800: INFO: service-headless-toggled-hd9jd started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:21:44.800: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:21:44.800: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:21:44.800: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:21:44.800: INFO: 	Container prometheus-operator ready: true, restart count 0
May 13 23:21:44.800: INFO: test-container-pod started at 2022-05-13 23:21:15 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container webserver ready: false, restart count 0
May 13 23:21:44.800: INFO: test-container-pod started at 2022-05-13 23:21:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container webserver ready: true, restart count 0
May 13 23:21:44.800: INFO: verify-service-up-host-exec-pod started at 2022-05-13 23:21:36 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:21:44.800: INFO: host-test-container-pod started at 2022-05-13 23:21:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:21:44.800: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:21:44.800: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:21:44.800: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container discover ready: false, restart count 0
May 13 23:21:44.800: INFO: 	Container init ready: false, restart count 0
May 13 23:21:44.800: INFO: 	Container install ready: false, restart count 0
May 13 23:21:44.800: INFO: service-headless-toggled-f72sm started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:21:44.800: INFO: service-headless-toggled-v8nlc started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:21:44.800: INFO: iperf2-server-deployment-59979d877-c82zw started at 2022-05-13 23:21:01 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container iperf2-server ready: true, restart count 0
May 13 23:21:44.800: INFO: pod-server-1 started at 2022-05-13 23:21:15 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:21:44.800: INFO: netserver-1 started at  (0+0 container statuses recorded)
May 13 23:21:44.800: INFO: verify-service-up-exec-pod-xxzwb started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container agnhost-container ready: false, restart count 0
May 13 23:21:44.800: INFO: service-headless-kgn8r started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:21:44.800: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:21:45.783: INFO: 
Latency metrics for node node2
May 13 23:21:45.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1488" for this suite.


• Failure [108.759 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    May 13 23:21:42.711: failed dialing endpoint, did not find expected responses... 
    Tries 34
    Command curl -g -q -s 'http://10.244.3.146:8080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=30729&tries=1'
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":1,"skipped":170,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp"]}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:26.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-7159
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:26.591: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:26.623: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:28.629: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:30.629: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:32.628: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:34.630: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:36.627: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:38.631: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:40.627: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:42.628: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:44.627: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:46.628: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:48.630: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:21:48.635: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:21:50.638: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:21:52.639: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:22:00.663: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:22:00.663: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:22:00.670: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:00.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7159" for this suite.


S [SKIPPING] [34.244 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:38.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
May 13 23:21:38.178: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:40.182: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:42.182: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:44.184: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:46.181: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May 13 23:21:46.195: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:48.199: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:50.198: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:52.201: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:54.201: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:56.198: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May 13 23:21:58.218: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:00.221: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:02.221: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
May 13 23:22:02.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-6015 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
May 13 23:22:02.485: INFO: stderr: "+ grep -m 1 CLOSE_WAIT.*dport=11302\n+ conntrack -L -f ipv4 -d 10.10.190.208\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
May 13 23:22:02.485: INFO: stdout: "tcp      6 3597 CLOSE_WAIT src=10.244.3.188 dst=10.10.190.208 sport=58362 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=1398 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
May 13 23:22:02.485: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3597 CLOSE_WAIT src=10.244.3.188 dst=10.10.190.208 sport=58362 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=1398 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:02.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-6015" for this suite.


• [SLOW TEST:24.358 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":6,"skipped":1328,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:42.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-4354
STEP: creating replication controller externalip-test in namespace services-4354
I0513 23:21:42.050973      34 runners.go:190] Created replication controller with name: externalip-test, namespace: services-4354, replica count: 2
I0513 23:21:45.102475      34 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:21:48.104690      34 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 13 23:21:48.104: INFO: Creating new exec pod
May 13 23:21:59.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May 13 23:21:59.390: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May 13 23:21:59.390: INFO: stdout: ""
May 13 23:22:00.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May 13 23:22:00.630: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May 13 23:22:00.630: INFO: stdout: ""
May 13 23:22:01.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May 13 23:22:01.625: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May 13 23:22:01.625: INFO: stdout: ""
May 13 23:22:02.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May 13 23:22:03.086: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May 13 23:22:03.086: INFO: stdout: "externalip-test-tf8cs"
May 13 23:22:03.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.26.14 80'
May 13 23:22:03.364: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.26.14 80\nConnection to 10.233.26.14 80 port [tcp/http] succeeded!\n"
May 13 23:22:03.364: INFO: stdout: "externalip-test-sh867"
May 13 23:22:03.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4354 exec execpod2smkq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
May 13 23:22:03.615: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
May 13 23:22:03.615: INFO: stdout: "externalip-test-tf8cs"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:03.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4354" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:21.607 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":3,"skipped":776,"failed":0}

SSS
------------------------------
May 13 23:22:03.635: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:20:45.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-8391
STEP: creating service service-headless in namespace services-8391
STEP: creating replication controller service-headless in namespace services-8391
I0513 23:20:45.192471      27 runners.go:190] Created replication controller with name: service-headless, namespace: services-8391, replica count: 3
I0513 23:20:48.243712      27 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:51.244451      27 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:54.246963      27 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-8391
STEP: creating service service-headless-toggled in namespace services-8391
STEP: creating replication controller service-headless-toggled in namespace services-8391
I0513 23:20:54.261968      27 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-8391, replica count: 3
I0513 23:20:57.313106      27 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:21:00.313469      27 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:21:03.315700      27 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May 13 23:21:03.319: INFO: Creating new host exec pod
May 13 23:21:03.332: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:05.335: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:21:05.336: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:21:11.351: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done" in pod services-8391/verify-service-up-host-exec-pod
May 13 23:21:11.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done'
May 13 23:21:11.714: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n"
May 13 23:21:11.714: INFO: stdout: "service-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-head
less-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9
jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\n"
May 13 23:21:11.715: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done" in pod services-8391/verify-service-up-exec-pod-5nf4j
May 13 23:21:11.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-up-exec-pod-5nf4j -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done'
May 13 23:21:12.117: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n"
May 13 23:21:12.117: INFO: stdout: "service-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-head
less-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9
jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8391
STEP: Deleting pod verify-service-up-exec-pod-5nf4j in namespace services-8391
STEP: verifying service-headless is not up
May 13 23:21:12.129: INFO: Creating new host exec pod
May 13 23:21:12.141: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:14.146: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:16.144: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:21:16.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed'
May 13 23:21:18.430: INFO: rc: 28
May 13 23:21:18.430: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed" in pod services-8391/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.3.155:80
command terminated with exit code 28

error:
exit status 28
Output: 
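The down-check above treats curl's exit code 28 (connect timeout) as the desired outcome: if the ClusterIP answered, `echo service-down-failed` would run and the test would flag it. A minimal offline sketch of that return-code logic, using a hypothetical `fake_curl` stand-in for a timing-out curl:

```shell
# fake_curl is a stand-in (assumption) for `curl --connect-timeout 2 <vip>`
# timing out; real curl exits 28 on a connection timeout.
fake_curl() { return 28; }

if fake_curl; then
  # Any successful HTTP response here means the service is still routable,
  # which the e2e test reports as "service-down-failed".
  echo "service-down-failed"
else
  rc=$?
  echo "rc: $rc"   # rc: 28 -- timeout, i.e. the service VIP is gone
fi
```

The e2e framework performs the same inspection on kubectl's exit status rather than parsing output.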
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8391
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
May 13 23:21:18.447: INFO: Creating new host exec pod
May 13 23:21:18.459: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:20.463: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:22.464: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:24.463: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:26.463: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:28.464: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:30.466: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:32.465: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:34.463: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:21:34.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.31:80 && echo service-down-failed'
May 13 23:21:36.772: INFO: rc: 28
May 13 23:21:36.772: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.36.31:80 && echo service-down-failed" in pod services-8391/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.31:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.36.31:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8391
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
May 13 23:21:36.786: INFO: Creating new host exec pod
May 13 23:21:36.802: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:38.806: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:40.807: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:42.807: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:44.805: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:21:44.805: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:21:54.820: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done" in pod services-8391/verify-service-up-host-exec-pod
May 13 23:21:54.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done'
May 13 23:21:55.183: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n"
May 13 23:21:55.184: INFO: stdout: "service-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-head
less-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72
sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\n"
May 13 23:21:55.184: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done" in pod services-8391/verify-service-up-exec-pod-wzk8b
May 13 23:21:55.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-up-exec-pod-wzk8b -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.31:80 2>&1 || true; echo; done'
May 13 23:21:55.528: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.31:80\n+ echo\n"
May 13 23:21:55.529: INFO: stdout: "service-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-head
less-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72
sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-hd9jd\nservice-headless-toggled-v8nlc\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-hd9jd\nservice-headless-toggled-f72sm\nservice-headless-toggled-v8nlc\nservice-headless-toggled-f72sm\nservice-headless-toggled-f72sm\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8391
STEP: Deleting pod verify-service-up-exec-pod-wzk8b in namespace services-8391
STEP: verifying service-headless is still not up
May 13 23:21:55.541: INFO: Creating new host exec pod
May 13 23:21:55.553: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:57.559: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:59.557: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:01.557: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:22:01.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed'
May 13 23:22:03.819: INFO: rc: 28
May 13 23:22:03.819: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed" in pod services-8391/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8391 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.155:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.3.155:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8391
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:03.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8391" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:78.671 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":355,"failed":0}
May 13 23:22:03.836: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:22:00.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
May 13 23:22:01.070: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-7387" to be "Succeeded or Failed"
May 13 23:22:01.072: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.854629ms
May 13 23:22:03.076: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005673198s
May 13 23:22:05.080: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010131805s
STEP: Saw pod success
May 13 23:22:05.080: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:05.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7387" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":3,"skipped":646,"failed":0}
May 13 23:22:05.091: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:38.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-665
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:38.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:38.338: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:40.341: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:42.343: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:44.343: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:46.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:48.342: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:50.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:52.343: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:54.342: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:56.343: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:58.343: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:00.344: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:22:00.349: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 13 23:22:02.353: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:22:06.375: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:22:06.375: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:22:06.383: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:06.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-665" for this suite.


S [SKIPPING] [28.207 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
May 13 23:22:06.395: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:22:02.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8772.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8772.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8772.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8772.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8772.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8772.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 13 23:22:08.914: INFO: DNS probes using dns-8772/dns-test-4a5ab64e-d107-4d0c-97e8-07d7e1b73aed succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:08.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8772" for this suite.


• [SLOW TEST:6.119 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":7,"skipped":1503,"failed":0}
May 13 23:22:08.930: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:42.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-8094
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:42.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:42.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:44.900: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:46.900: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:48.902: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:50.904: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:52.902: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:54.901: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:56.901: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:58.901: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:00.901: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:02.902: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:04.900: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:22:04.905: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:22:12.926: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:22:12.926: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:22:12.933: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:22:12.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8094" for this suite.


S [SKIPPING] [30.193 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
May 13 23:22:12.944: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:19:56.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0513 23:19:56.345472      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:19:56.345: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:19:56.349: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-1389
May 13 23:19:56.357: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-1389
I0513 23:19:56.384433      24 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-1389, replica count: 2
I0513 23:19:59.435723      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:02.438986      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:20:05.439605      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 13 23:20:05.439: INFO: Creating new exec pod
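The connectivity probes that follow run `echo hostName | nc ...` inside the exec pod via `kubectl exec`. A minimal Python sketch of how such a probe command line is assembled (the function name is illustrative, not the e2e framework's code; pod, namespace, and target values are taken from this log):

```python
import shlex

def exec_probe_cmd(namespace, pod, host, port, kubeconfig="/root/.kube/config"):
    """Build the kubectl exec argv seen in the log: probe host:port with nc
    from inside an exec pod. Returns the command as an argument list."""
    shell = f"echo hostName | nc -v -t -w 2 {host} {port}"
    return [
        "kubectl", f"--kubeconfig={kubeconfig}",
        f"--namespace={namespace}", "exec", pod,
        "--", "/bin/sh", "-x", "-c", shell,
    ]

cmd = exec_probe_cmd("services-1389", "execpodswfnp", "10.10.190.207", 31420)
print(" ".join(shlex.quote(c) for c in cmd))
```

Passing the probe as a single `-c` string keeps the pipe (`|`) inside the pod's shell rather than the test runner's.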
May 13 23:20:12.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
May 13 23:20:12.837: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
May 13 23:20:12.838: INFO: stdout: "nodeport-update-service-k576q"
May 13 23:20:12.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.13.195 80'
May 13 23:20:13.104: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.13.195 80\nConnection to 10.233.13.195 80 port [tcp/http] succeeded!\n"
May 13 23:20:13.105: INFO: stdout: "nodeport-update-service-d24hz"
May 13 23:20:13.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:13.359: INFO: rc: 1
May 13 23:20:13.359: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
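The "Retrying..." entries that follow show the framework re-running the same probe about once a second until it succeeds or a deadline expires. A generic Python sketch of that poll-until-reachable pattern (names and timings here are assumptions for illustration, not the e2e framework's implementation):

```python
import time

def wait_reachable(probe, timeout=120.0, interval=1.0):
    """Re-run probe() until it returns True or timeout elapses.
    Mirrors the ~1s retry cadence visible in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Example: a probe that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_reachable(flaky_probe, timeout=10.0, interval=0.01))  # True
```

In the real test the probe is the `kubectl exec ... nc` command above; a sustained "Connection refused" past the deadline is what turns these retries into a test failure.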
May 13 23:20:14.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:14.651: INFO: rc: 1
May 13 23:20:14.651: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:15.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:15.618: INFO: rc: 1
May 13 23:20:15.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:16.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:16.902: INFO: rc: 1
May 13 23:20:16.902: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:17.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:17.623: INFO: rc: 1
May 13 23:20:17.623: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:18.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:18.923: INFO: rc: 1
May 13 23:20:18.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:19.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:19.664: INFO: rc: 1
May 13 23:20:19.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:20.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:21.056: INFO: rc: 1
May 13 23:20:21.056: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:21.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:22.045: INFO: rc: 1
May 13 23:20:22.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:22.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:22.727: INFO: rc: 1
May 13 23:20:22.727: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:23.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:23.596: INFO: rc: 1
May 13 23:20:23.596: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:24.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:24.631: INFO: rc: 1
May 13 23:20:24.631: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:25.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:25.609: INFO: rc: 1
May 13 23:20:25.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:26.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:26.659: INFO: rc: 1
May 13 23:20:26.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:27.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:27.608: INFO: rc: 1
May 13 23:20:27.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:28.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:28.662: INFO: rc: 1
May 13 23:20:28.662: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:29.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:29.803: INFO: rc: 1
May 13 23:20:29.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31420
+ echo hostName
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:30.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:30.771: INFO: rc: 1
May 13 23:20:30.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:31.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:31.757: INFO: rc: 1
May 13 23:20:31.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:32.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:33.015: INFO: rc: 1
May 13 23:20:33.015: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:33.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:33.728: INFO: rc: 1
May 13 23:20:33.728: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:34.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:34.650: INFO: rc: 1
May 13 23:20:34.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:35.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:35.648: INFO: rc: 1
May 13 23:20:35.648: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:36.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:36.722: INFO: rc: 1
May 13 23:20:36.722: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:37.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:37.658: INFO: rc: 1
May 13 23:20:37.659: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:38.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:38.929: INFO: rc: 1
May 13 23:20:38.929: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:39.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:40.201: INFO: rc: 1
May 13 23:20:40.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:40.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:41.004: INFO: rc: 1
May 13 23:20:41.004: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:41.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:41.658: INFO: rc: 1
May 13 23:20:41.658: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:42.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:42.650: INFO: rc: 1
May 13 23:20:42.650: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31420
+ echo hostName
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:43.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:43.604: INFO: rc: 1
May 13 23:20:43.604: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:44.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:44.832: INFO: rc: 1
May 13 23:20:44.832: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:45.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:45.674: INFO: rc: 1
May 13 23:20:45.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:46.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:46.760: INFO: rc: 1
May 13 23:20:46.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:47.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:47.610: INFO: rc: 1
May 13 23:20:47.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:48.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:48.735: INFO: rc: 1
May 13 23:20:48.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:49.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:49.598: INFO: rc: 1
May 13 23:20:49.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:50.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:50.845: INFO: rc: 1
May 13 23:20:50.845: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:51.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:51.599: INFO: rc: 1
May 13 23:20:51.599: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:52.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:52.617: INFO: rc: 1
May 13 23:20:52.617: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:53.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:53.622: INFO: rc: 1
May 13 23:20:53.622: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:20:54.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:20:54.807: INFO: rc: 1
May 13 23:20:54.807: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... 50 further identical retry attempts elided (May 13 23:20:55.361 through May 13 23:21:44.629): roughly once per second the suite re-ran the same command, '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420', and each attempt returned rc: 1 with empty stdout and the same stderr, "nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused" / "command terminated with exit code 1", followed by "Retrying..." ...]
May 13 23:21:45.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:45.675: INFO: rc: 1
May 13 23:21:45.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:46.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:46.912: INFO: rc: 1
May 13 23:21:46.912: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:47.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:47.915: INFO: rc: 1
May 13 23:21:47.915: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:48.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:48.785: INFO: rc: 1
May 13 23:21:48.785: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:49.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:49.827: INFO: rc: 1
May 13 23:21:49.827: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:50.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:50.968: INFO: rc: 1
May 13 23:21:50.968: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:51.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:51.907: INFO: rc: 1
May 13 23:21:51.907: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:52.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:52.646: INFO: rc: 1
May 13 23:21:52.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:53.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:53.715: INFO: rc: 1
May 13 23:21:53.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:54.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:54.601: INFO: rc: 1
May 13 23:21:54.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:55.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:55.628: INFO: rc: 1
May 13 23:21:55.628: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:56.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:56.624: INFO: rc: 1
May 13 23:21:56.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:57.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:57.613: INFO: rc: 1
May 13 23:21:57.613: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:58.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:58.615: INFO: rc: 1
May 13 23:21:58.615: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:21:59.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:21:59.598: INFO: rc: 1
May 13 23:21:59.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:00.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:00.601: INFO: rc: 1
May 13 23:22:00.601: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:01.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:01.627: INFO: rc: 1
May 13 23:22:01.627: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:02.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:02.864: INFO: rc: 1
May 13 23:22:02.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:03.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:03.646: INFO: rc: 1
May 13 23:22:03.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:04.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:04.718: INFO: rc: 1
May 13 23:22:04.718: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:05.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:05.678: INFO: rc: 1
May 13 23:22:05.678: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:06.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:06.688: INFO: rc: 1
May 13 23:22:06.688: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:07.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:07.724: INFO: rc: 1
May 13 23:22:07.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:08.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:08.868: INFO: rc: 1
May 13 23:22:08.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:09.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:10.315: INFO: rc: 1
May 13 23:22:10.315: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:10.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:10.598: INFO: rc: 1
May 13 23:22:10.598: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:11.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:11.668: INFO: rc: 1
May 13 23:22:11.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:12.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:12.854: INFO: rc: 1
May 13 23:22:12.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:13.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:13.835: INFO: rc: 1
May 13 23:22:13.835: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:13.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420'
May 13 23:22:14.097: INFO: rc: 1
May 13 23:22:14.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1389 exec execpodswfnp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31420:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31420
nc: connect to 10.10.190.207 port 31420 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 13 23:22:14.098: FAIL: Unexpected error:
    <*errors.errorString | 0xc003f96380>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31420 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31420 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001681b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001681b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001681b00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
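The loop in the log above is the e2e framework's retry-until-timeout pattern: run the kubectl/nc probe, log "Retrying...", and give up once the 2m0s deadline passes. A minimal POSIX-shell sketch of that pattern (an illustration only, not the actual framework code; `probe` is a hypothetical stand-in for the failing nc call, and a fixed attempt cap stands in for the real 2-minute wall clock):

```shell
# Sketch of the retry loop seen in the log (illustration, not the
# real e2e framework code). `probe` is a hypothetical stand-in for
# `kubectl exec ... | nc -v -t -w 2 10.10.190.207 31420`, which
# failed on every attempt in this run, so it always returns 1 here.
probe() {
  return 1   # real probe: exit status of the nc connection attempt
}

attempts=0
max_attempts=5   # stands in for the 2m0s timeout in the real test
rc=1
while [ "$attempts" -lt "$max_attempts" ]; do
  if probe; then
    rc=0
    break
  fi
  attempts=$((attempts + 1))
  echo "Retrying..."
done
echo "final rc: $rc"   # rc 1 mirrors the test's FAIL after the timeout
```

Note that every retry here ended in "Connection refused" rather than a timeout, which typically means the node actively rejected the connection, i.e. nothing was listening on NodePort 31420 at probe time.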
May 13 23:22:14.099: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-1389".
STEP: Found 17 events.
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:56 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-d24hz
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:56 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-k576q
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:56 +0000 UTC - event for nodeport-update-service-d24hz: {default-scheduler } Scheduled: Successfully assigned services-1389/nodeport-update-service-d24hz to node1
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:56 +0000 UTC - event for nodeport-update-service-k576q: {default-scheduler } Scheduled: Successfully assigned services-1389/nodeport-update-service-k576q to node1
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:58 +0000 UTC - event for nodeport-update-service-d24hz: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:59 +0000 UTC - event for nodeport-update-service-d24hz: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.241546372s
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:59 +0000 UTC - event for nodeport-update-service-d24hz: {kubelet node1} Created: Created container nodeport-update-service
May 13 23:22:14.126: INFO: At 2022-05-13 23:19:59 +0000 UTC - event for nodeport-update-service-k576q: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:00 +0000 UTC - event for nodeport-update-service-d24hz: {kubelet node1} Started: Started container nodeport-update-service
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:00 +0000 UTC - event for nodeport-update-service-k576q: {kubelet node1} Created: Created container nodeport-update-service
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:00 +0000 UTC - event for nodeport-update-service-k576q: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 827.743992ms
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:02 +0000 UTC - event for nodeport-update-service-k576q: {kubelet node1} Started: Started container nodeport-update-service
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:05 +0000 UTC - event for execpodswfnp: {default-scheduler } Scheduled: Successfully assigned services-1389/execpodswfnp to node2
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:07 +0000 UTC - event for execpodswfnp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:08 +0000 UTC - event for execpodswfnp: {kubelet node2} Started: Started container agnhost-container
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:08 +0000 UTC - event for execpodswfnp: {kubelet node2} Created: Created container agnhost-container
May 13 23:22:14.126: INFO: At 2022-05-13 23:20:08 +0000 UTC - event for execpodswfnp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 363.157576ms
May 13 23:22:14.129: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
May 13 23:22:14.129: INFO: execpodswfnp                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:05 +0000 UTC  }]
May 13 23:22:14.129: INFO: nodeport-update-service-d24hz  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:56 +0000 UTC  }]
May 13 23:22:14.129: INFO: nodeport-update-service-k576q  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:20:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:19:56 +0000 UTC  }]
May 13 23:22:14.129: INFO: 
May 13 23:22:14.133: INFO: 
Logging node info for node master1
May 13 23:22:14.136: INFO: Node Info: &Node{ObjectMeta:{master1    e893469e-45f9-457b-9379-276178f6209f 75527 0 2022-05-13 19:57:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:14.137: INFO: 
Logging kubelet events for node master1
May 13 23:22:14.138: INFO: 
Logging pods the kubelet thinks are on node master1
May 13 23:22:14.158: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:14.158: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container nfd-controller ready: true, restart count 0
May 13 23:22:14.158: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.158: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:14.158: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:14.158: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:14.158: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:14.158: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container docker-registry ready: true, restart count 0
May 13 23:22:14.158: INFO: 	Container nginx ready: true, restart count 0
May 13 23:22:14.158: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:14.158: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:14.158: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.158: INFO: 	Container kube-scheduler ready: true, restart count 0
May 13 23:22:14.250: INFO: 
Latency metrics for node master1
May 13 23:22:14.250: INFO: 
Logging node info for node master2
May 13 23:22:14.253: INFO: Node Info: &Node{ObjectMeta:{master2    6394fb00-7ac6-4b0d-af37-0e7baf892992 75524 0 2022-05-13 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:06 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:14.254: INFO: 
Logging kubelet events for node master2
May 13 23:22:14.257: INFO: 
Logging pods the kubelet thinks are on node master2
May 13 23:22:14.265: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.265: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:14.265: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.265: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:14.265: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:14.265: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:14.265: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:14.265: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.265: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:14.265: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.265: INFO: 	Container coredns ready: true, restart count 1
May 13 23:22:14.265: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.266: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:14.266: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.266: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:22:14.266: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.266: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.266: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:14.351: INFO: 
Latency metrics for node master2
May 13 23:22:14.351: INFO: 
Logging node info for node master3
May 13 23:22:14.353: INFO: Node Info: &Node{ObjectMeta:{master3    11a40d0b-d9d1-449f-a587-cc897edbfd9b 75810 0 2022-05-13 19:58:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:14.353: INFO: 
Logging kubelet events for node master3
May 13 23:22:14.355: INFO: 
Logging pods the kubelet thinks are on node master3
May 13 23:22:14.365: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:14.365: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container coredns ready: true, restart count 1
May 13 23:22:14.365: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:14.365: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:22:14.365: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:14.365: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Init container install-cni ready: true, restart count 0
May 13 23:22:14.365: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:14.365: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container autoscaler ready: true, restart count 1
May 13 23:22:14.365: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.365: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:14.365: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.365: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:14.450: INFO: 
Latency metrics for node master3
May 13 23:22:14.450: INFO: 
Logging node info for node node1
May 13 23:22:14.453: INFO: Node Info: &Node{ObjectMeta:{node1    dca01e5e-a739-4ccc-b102-bfd163c4b832 75530 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:26:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-13 23:04:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:07 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:07 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:07 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:07 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 
quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:14.454: INFO: 
Logging kubelet events for node node1
May 13 23:22:14.456: INFO: 
Logging pods the kubelet thinks are on node node1
May 13 23:22:14.471: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container collectd ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:22:14.471: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:14.471: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:22:14.471: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:14.471: INFO: service-headless-zmq4r started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:22:14.471: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:22:14.471: INFO: nodeport-update-service-d24hz started at 2022-05-13 23:19:56 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 13 23:22:14.471: INFO: netserver-0 started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.471: INFO: up-down-2-sr9jb started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:14.471: INFO: e2e-net-exec started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container e2e-net-exec ready: true, restart count 0
May 13 23:22:14.471: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:14.471: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container discover ready: false, restart count 0
May 13 23:22:14.471: INFO: 	Container init ready: false, restart count 0
May 13 23:22:14.471: INFO: 	Container install ready: false, restart count 0
May 13 23:22:14.471: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container config-reloader ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container grafana ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container prometheus ready: true, restart count 1
May 13 23:22:14.471: INFO: pod-client started at 2022-05-13 23:21:09 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container pod-client ready: true, restart count 0
May 13 23:22:14.471: INFO: netserver-0 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.471: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:22:14.471: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 23:22:14.471: INFO: startup-script started at 2022-05-13 23:20:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container startup-script ready: true, restart count 0
May 13 23:22:14.471: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 13 23:22:14.471: INFO: up-down-2-rr2wz started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:14.471: INFO: netserver-0 started at 2022-05-13 23:21:42 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.471: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:14.471: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:22:14.471: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:22:14.471: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:22:14.471: INFO: up-down-2-69k5s started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:14.471: INFO: service-headless-xflzh started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:22:14.471: INFO: test-container-pod started at 2022-05-13 23:22:02 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.471: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container cmk-webhook ready: true, restart count 0
May 13 23:22:14.471: INFO: nodeport-update-service-k576q started at 2022-05-13 23:19:56 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.471: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 13 23:22:14.751: INFO: 
Latency metrics for node node1
May 13 23:22:14.751: INFO: 
Logging node info for node node2
May 13 23:22:14.754: INFO: Node Info: &Node{ObjectMeta:{node2    461ea6c2-df11-4be4-802e-29bddc0f2535 75782 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:24:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:12 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:12 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:12 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:12 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:14.755: INFO: 
Logging kubelet events for node node2
May 13 23:22:14.758: INFO: 
Logging pods the kubelet thinks are on node node2
May 13 23:22:14.773: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:14.773: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:22:14.773: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:22:14.773: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container collectd ready: true, restart count 0
May 13 23:22:14.773: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:22:14.773: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:22:14.773: INFO: service-headless-toggled-hd9jd started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:22:14.773: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:22:14.773: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:22:14.773: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.773: INFO: 	Container prometheus-operator ready: true, restart count 0
May 13 23:22:14.773: INFO: host-test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:14.773: INFO: verify-service-down-host-exec-pod started at 2022-05-13 23:22:12 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container agnhost-container ready: false, restart count 0
May 13 23:22:14.773: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:22:14.773: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:14.773: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container discover ready: false, restart count 0
May 13 23:22:14.773: INFO: 	Container init ready: false, restart count 0
May 13 23:22:14.773: INFO: 	Container install ready: false, restart count 0
May 13 23:22:14.773: INFO: service-headless-toggled-f72sm started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:22:14.773: INFO: service-headless-toggled-v8nlc started at 2022-05-13 23:20:54 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container service-headless-toggled ready: true, restart count 0
May 13 23:22:14.773: INFO: pod-server-1 started at 2022-05-13 23:21:15 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:14.773: INFO: netserver-1 started at 2022-05-13 23:21:42 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.773: INFO: service-headless-kgn8r started at 2022-05-13 23:20:45 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container service-headless ready: true, restart count 0
May 13 23:22:14.773: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:14.773: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:14.773: INFO: netserver-1 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.773: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:14.773: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container tas-extender ready: true, restart count 0
May 13 23:22:14.773: INFO: test-container-pod started at 2022-05-13 23:22:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.773: INFO: execpodswfnp started at 2022-05-13 23:20:05 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:14.773: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:22:14.773: INFO: netserver-1 started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:14.773: INFO: test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:14.773: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:15.580: INFO: 
Latency metrics for node node2
May 13 23:22:15.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1389" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [139.268 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  May 13 23:22:14.098: Unexpected error:
      <*errors.errorString | 0xc003f96380>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31420 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31420 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":0,"skipped":3,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
May 13 23:22:15.598: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:09.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-1194
STEP: creating a client pod for probing the service svc-udp
May 13 23:21:09.046: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:11.051: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:13.052: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:15.050: INFO: The status of Pod pod-client is Running (Ready = true)
May 13 23:21:15.059: INFO: Pod client logs: Fri May 13 23:21:13 UTC 2022
Fri May 13 23:21:13 UTC 2022 Try: 1

Fri May 13 23:21:13 UTC 2022 Try: 2

Fri May 13 23:21:13 UTC 2022 Try: 3

Fri May 13 23:21:13 UTC 2022 Try: 4

Fri May 13 23:21:13 UTC 2022 Try: 5

Fri May 13 23:21:13 UTC 2022 Try: 6

Fri May 13 23:21:13 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
May 13 23:21:15.072: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:17.076: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:19.077: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:21.075: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:23.078: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-1194 to expose endpoints map[pod-server-1:[80]]
May 13 23:21:23.090: INFO: successfully validated that service svc-udp in namespace conntrack-1194 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
May 13 23:22:23.123: INFO: Pod client logs: Fri May 13 23:21:13 UTC 2022
Fri May 13 23:21:13 UTC 2022 Try: 1

Fri May 13 23:21:13 UTC 2022 Try: 2

Fri May 13 23:21:13 UTC 2022 Try: 3

Fri May 13 23:21:13 UTC 2022 Try: 4

Fri May 13 23:21:13 UTC 2022 Try: 5

Fri May 13 23:21:13 UTC 2022 Try: 6

Fri May 13 23:21:13 UTC 2022 Try: 7

Fri May 13 23:21:18 UTC 2022 Try: 8

Fri May 13 23:21:18 UTC 2022 Try: 9

Fri May 13 23:21:18 UTC 2022 Try: 10

Fri May 13 23:21:18 UTC 2022 Try: 11

Fri May 13 23:21:18 UTC 2022 Try: 12

Fri May 13 23:21:18 UTC 2022 Try: 13

Fri May 13 23:21:23 UTC 2022 Try: 14

Fri May 13 23:21:23 UTC 2022 Try: 15

Fri May 13 23:21:23 UTC 2022 Try: 16

Fri May 13 23:21:23 UTC 2022 Try: 17

Fri May 13 23:21:23 UTC 2022 Try: 18

Fri May 13 23:21:23 UTC 2022 Try: 19

Fri May 13 23:21:28 UTC 2022 Try: 20

Fri May 13 23:21:28 UTC 2022 Try: 21

Fri May 13 23:21:28 UTC 2022 Try: 22

Fri May 13 23:21:28 UTC 2022 Try: 23

Fri May 13 23:21:28 UTC 2022 Try: 24

Fri May 13 23:21:28 UTC 2022 Try: 25

Fri May 13 23:21:33 UTC 2022 Try: 26

Fri May 13 23:21:33 UTC 2022 Try: 27

Fri May 13 23:21:33 UTC 2022 Try: 28

Fri May 13 23:21:33 UTC 2022 Try: 29

Fri May 13 23:21:33 UTC 2022 Try: 30

Fri May 13 23:21:33 UTC 2022 Try: 31

Fri May 13 23:21:38 UTC 2022 Try: 32

Fri May 13 23:21:38 UTC 2022 Try: 33

Fri May 13 23:21:38 UTC 2022 Try: 34

Fri May 13 23:21:38 UTC 2022 Try: 35

Fri May 13 23:21:38 UTC 2022 Try: 36

Fri May 13 23:21:38 UTC 2022 Try: 37

Fri May 13 23:21:43 UTC 2022 Try: 38

Fri May 13 23:21:43 UTC 2022 Try: 39

Fri May 13 23:21:43 UTC 2022 Try: 40

Fri May 13 23:21:43 UTC 2022 Try: 41

Fri May 13 23:21:43 UTC 2022 Try: 42

Fri May 13 23:21:43 UTC 2022 Try: 43

Fri May 13 23:21:48 UTC 2022 Try: 44

Fri May 13 23:21:48 UTC 2022 Try: 45

Fri May 13 23:21:48 UTC 2022 Try: 46

Fri May 13 23:21:48 UTC 2022 Try: 47

Fri May 13 23:21:48 UTC 2022 Try: 48

Fri May 13 23:21:48 UTC 2022 Try: 49

Fri May 13 23:21:53 UTC 2022 Try: 50

Fri May 13 23:21:53 UTC 2022 Try: 51

Fri May 13 23:21:53 UTC 2022 Try: 52

Fri May 13 23:21:53 UTC 2022 Try: 53

Fri May 13 23:21:53 UTC 2022 Try: 54

Fri May 13 23:21:53 UTC 2022 Try: 55

Fri May 13 23:21:58 UTC 2022 Try: 56

Fri May 13 23:21:58 UTC 2022 Try: 57

Fri May 13 23:21:58 UTC 2022 Try: 58

Fri May 13 23:21:58 UTC 2022 Try: 59

Fri May 13 23:21:58 UTC 2022 Try: 60

Fri May 13 23:21:58 UTC 2022 Try: 61

Fri May 13 23:22:03 UTC 2022 Try: 62

Fri May 13 23:22:03 UTC 2022 Try: 63

Fri May 13 23:22:03 UTC 2022 Try: 64

Fri May 13 23:22:03 UTC 2022 Try: 65

Fri May 13 23:22:03 UTC 2022 Try: 66

Fri May 13 23:22:03 UTC 2022 Try: 67

Fri May 13 23:22:08 UTC 2022 Try: 68

Fri May 13 23:22:08 UTC 2022 Try: 69

Fri May 13 23:22:08 UTC 2022 Try: 70

Fri May 13 23:22:08 UTC 2022 Try: 71

Fri May 13 23:22:08 UTC 2022 Try: 72

Fri May 13 23:22:08 UTC 2022 Try: 73

Fri May 13 23:22:13 UTC 2022 Try: 74

Fri May 13 23:22:13 UTC 2022 Try: 75

Fri May 13 23:22:13 UTC 2022 Try: 76

Fri May 13 23:22:13 UTC 2022 Try: 77

Fri May 13 23:22:13 UTC 2022 Try: 78

Fri May 13 23:22:13 UTC 2022 Try: 79

Fri May 13 23:22:18 UTC 2022 Try: 80

Fri May 13 23:22:18 UTC 2022 Try: 81

Fri May 13 23:22:18 UTC 2022 Try: 82

Fri May 13 23:22:18 UTC 2022 Try: 83

Fri May 13 23:22:18 UTC 2022 Try: 84

Fri May 13 23:22:18 UTC 2022 Try: 85

May 13 23:22:23.123: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001403200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001403200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001403200, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-1194".
STEP: Found 8 events.
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:11 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:12 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 291.841726ms
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:12 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:13 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:20 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:21 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 263.558677ms
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:21 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
May 13 23:22:23.127: INFO: At 2022-05-13 23:21:22 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
May 13 23:22:23.130: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
May 13 23:22:23.130: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:09 +0000 UTC  }]
May 13 23:22:23.130: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:22 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:15 +0000 UTC  }]
May 13 23:22:23.130: INFO: 
May 13 23:22:23.134: INFO: 
Logging node info for node master1
May 13 23:22:23.137: INFO: Node Info: &Node{ObjectMeta:{master1    e893469e-45f9-457b-9379-276178f6209f 75874 0 2022-05-13 19:57:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:23.137: INFO: 
Logging kubelet events for node master1
May 13 23:22:23.139: INFO: 
Logging pods the kubelet thinks are on node master1
May 13 23:22:23.160: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:23.160: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:23.160: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:23.160: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container docker-registry ready: true, restart count 0
May 13 23:22:23.160: INFO: 	Container nginx ready: true, restart count 0
May 13 23:22:23.160: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:23.160: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:23.160: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-scheduler ready: true, restart count 0
May 13 23:22:23.160: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:23.160: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container nfd-controller ready: true, restart count 0
May 13 23:22:23.160: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.160: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.160: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:23.250: INFO: 
Latency metrics for node master1
May 13 23:22:23.250: INFO: 
Logging node info for node master2
May 13 23:22:23.253: INFO: Node Info: &Node{ObjectMeta:{master2    6394fb00-7ac6-4b0d-af37-0e7baf892992 75872 0 2022-05-13 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:16 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:23.253: INFO: 
Logging kubelet events for node master2
May 13 23:22:23.255: INFO: 
Logging pods the kubelet thinks are on node master2
May 13 23:22:23.263: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:23.263: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:22:23.263: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.263: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:23.263: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:23.263: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container coredns ready: true, restart count 1
May 13 23:22:23.263: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:23.263: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.263: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:23.263: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:23.264: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:23.264: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:23.356: INFO: 
Latency metrics for node master2
May 13 23:22:23.356: INFO: 
Logging node info for node master3
May 13 23:22:23.358: INFO: Node Info: &Node{ObjectMeta:{master3    11a40d0b-d9d1-449f-a587-cc897edbfd9b 75810 0 2022-05-13 19:58:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:13 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:23.359: INFO: 
Logging kubelet events for node master3
May 13 23:22:23.361: INFO: 
Logging pods the kubelet thinks are on node master3
May 13 23:22:23.370: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:23.370: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container coredns ready: true, restart count 1
May 13 23:22:23.370: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:22:23.370: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:22:23.370: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:23.370: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Init container install-cni ready: true, restart count 0
May 13 23:22:23.370: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:22:23.370: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container autoscaler ready: true, restart count 1
May 13 23:22:23.370: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.370: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:23.370: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.370: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:22:23.450: INFO: 
Latency metrics for node master3
May 13 23:22:23.450: INFO: 
Logging node info for node node1
May 13 23:22:23.454: INFO: Node Info: &Node{ObjectMeta:{node1    dca01e5e-a739-4ccc-b102-bfd163c4b832 75877 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:26:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-13 23:04:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:17 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:17 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:17 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:17 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 
quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:23.455: INFO: 
Logging kubelet events for node node1
May 13 23:22:23.457: INFO: 
Logging pods the kubelet thinks is on node node1
May 13 23:22:23.473: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 13 23:22:23.473: INFO: up-down-2-rr2wz started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:23.473: INFO: up-down-2-69k5s started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:23.473: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:23.473: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:22:23.473: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:22:23.473: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container cmk-webhook ready: true, restart count 0
May 13 23:22:23.473: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container collectd ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:22:23.473: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:23.473: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:23.473: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:22:23.473: INFO: up-down-2-sr9jb started at 2022-05-13 23:21:29 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container up-down-2 ready: true, restart count 0
May 13 23:22:23.473: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:22:23.473: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container config-reloader ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container grafana ready: true, restart count 0
May 13 23:22:23.473: INFO: 	Container prometheus ready: true, restart count 1
May 13 23:22:23.473: INFO: pod-client started at 2022-05-13 23:21:09 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container pod-client ready: true, restart count 0
May 13 23:22:23.473: INFO: e2e-net-exec started at 2022-05-13 23:21:38 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container e2e-net-exec ready: true, restart count 0
May 13 23:22:23.473: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:23.473: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container discover ready: false, restart count 0
May 13 23:22:23.473: INFO: 	Container init ready: false, restart count 0
May 13 23:22:23.473: INFO: 	Container install ready: false, restart count 0
May 13 23:22:23.473: INFO: netserver-0 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:23.473: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:22:23.473: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.473: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 23:22:23.807: INFO: 
Latency metrics for node node1
May 13 23:22:23.807: INFO: 
Logging node info for node node2
May 13 23:22:23.811: INFO: Node Info: &Node{ObjectMeta:{node2    461ea6c2-df11-4be4-802e-29bddc0f2535 75970 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:24:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:22 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:22 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:22:22 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:22:22 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:22:23.811: INFO: 
Logging kubelet events for node node2
May 13 23:22:23.814: INFO: 
Logging pods the kubelet thinks is on node node2
May 13 23:22:23.825: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:22:23.825: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:22:23.825: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.825: INFO: 	Container prometheus-operator ready: true, restart count 0
May 13 23:22:23.825: INFO: host-test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:23.825: INFO: pod-server-1 started at 2022-05-13 23:21:15 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:23.825: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:22:23.825: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:22:23.825: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container discover ready: false, restart count 0
May 13 23:22:23.825: INFO: 	Container init ready: false, restart count 0
May 13 23:22:23.825: INFO: 	Container install ready: false, restart count 0
May 13 23:22:23.825: INFO: verify-service-up-host-exec-pod started at 2022-05-13 23:22:19 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:22:23.825: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:22:23.825: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:22:23.825: INFO: netserver-1 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:23.825: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:22:23.825: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container tas-extender ready: true, restart count 0
May 13 23:22:23.825: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:22:23.825: INFO: test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container webserver ready: true, restart count 0
May 13 23:22:23.825: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:22:23.825: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:22:23.825: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:22:23.825: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:22:23.825: INFO: 	Container collectd ready: true, restart count 0
May 13 23:22:23.825: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:22:23.825: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:22:24.025: INFO: 
Latency metrics for node node2
May 13 23:22:24.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-1194" for this suite.


• Failure [75.035 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  May 13 23:22:23.123: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":3,"skipped":777,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
May 13 23:22:24.040: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:23.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-1082
STEP: creating service up-down-1 in namespace services-1082
STEP: creating replication controller up-down-1 in namespace services-1082
I0513 23:21:23.846292      30 runners.go:190] Created replication controller with name: up-down-1, namespace: services-1082, replica count: 3
I0513 23:21:26.897653      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:21:29.898765      30 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-1082
STEP: creating service up-down-2 in namespace services-1082
STEP: creating replication controller up-down-2 in namespace services-1082
I0513 23:21:29.912669      30 runners.go:190] Created replication controller with name: up-down-2, namespace: services-1082, replica count: 3
I0513 23:21:32.964699      30 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:21:35.964947      30 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
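The runners.go lines above poll the replication controller every few seconds until all replicas report Running (note the progression from "1 running, 2 pending" to "3 running"). A minimal sketch of that wait loop, with a stubbed status function in place of the real API call and a hypothetical helper name:

```python
import itertools

def wait_for_running(get_status, replicas, poll_limit=20):
    # Poll until `running` equals the desired replica count and nothing is
    # pending; returns the number of polls used. The real loop also sleeps
    # between polls (the log shows roughly 3s intervals).
    for attempt in itertools.count(1):
        running, pending = get_status()
        if running == replicas and pending == 0:
            return attempt
        if attempt >= poll_limit:
            raise TimeoutError(f"{running}/{replicas} running after {attempt} polls")

# Stub mimicking the log's progression: 1 running / 2 pending, then 3 running.
statuses = iter([(1, 2), (3, 0)])
polls = wait_for_running(lambda: next(statuses), replicas=3)
print(polls)  # 2
```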
STEP: verifying service up-down-1 is up
May 13 23:21:35.967: INFO: Creating new host exec pod
May 13 23:21:35.984: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:37.989: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:21:37.989: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:21:46.005: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.24.42:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-host-exec-pod
May 13 23:21:46.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.24.42:80 2>&1 || true; echo; done'
May 13 23:21:46.365: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.24.42:80\n+ echo" (identical trace lines elided)
May 13 23:21:46.366: INFO: stdout: "up-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1
-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\n"
May 13 23:21:46.366: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.24.42:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-exec-pod-xxzwb
May 13 23:21:46.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-exec-pod-xxzwb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.24.42:80 2>&1 || true; echo; done'
May 13 23:21:46.915: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.24.42:80\n+ echo" (identical trace lines elided)
May 13 23:21:46.916: INFO: stdout: "up-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1
-w4djp\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\nup-down-1-fql58\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-w4djp\nup-down-1-w4djp\nup-down-1-s25ql\nup-down-1-s25ql\nup-down-1-fql58\n"
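The "verifying service has 3 reachable backends" step above passes when the wget loop's stdout contains every endpoint pod name at least once, which is why the 150 responses are checked rather than any single one. A hedged sketch of that check (the pod names are taken from the log; the helper name is mine):

```python
def reachable_backends(stdout, expected):
    # Return the set of expected backend pod names that appear in the probe output.
    seen = {line for line in stdout.splitlines() if line}
    return seen & set(expected)

# Abbreviated sample of the stdout logged above.
stdout = "up-down-1-w4djp\nup-down-1-fql58\nup-down-1-s25ql\nup-down-1-w4djp\n"
expected = ["up-down-1-w4djp", "up-down-1-fql58", "up-down-1-s25ql"]
hit = reachable_backends(stdout, expected)
print(len(hit) == len(expected))  # True: all 3 backends answered
```

A missing backend (a pod the service never load-balanced to) would leave `hit` smaller than `expected`, and the e2e step would retry or fail.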
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1082
STEP: Deleting pod verify-service-up-exec-pod-xxzwb in namespace services-1082
STEP: verifying service up-down-2 is up
May 13 23:21:46.928: INFO: Creating new host exec pod
May 13 23:21:46.939: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:48.943: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:50.943: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:52.942: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:54.942: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:21:54.942: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:22:02.962: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-host-exec-pod
May 13 23:22:02.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:03.342: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n... (remaining 149 identical '+ wget'/'+ echo' iterations elided) ...\n"
May 13 23:22:03.342: INFO: stdout: "up-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2
-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\n"
May 13 23:22:03.343: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-exec-pod-p5tsr
May 13 23:22:03.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-exec-pod-p5tsr -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:03.719: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n... (remaining 149 identical '+ wget'/'+ echo' iterations elided) ...\n"
May 13 23:22:03.719: INFO: stdout: "up-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2
-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1082
STEP: Deleting pod verify-service-up-exec-pod-p5tsr in namespace services-1082
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-1082, will wait for the garbage collector to delete the pods
May 13 23:22:03.788: INFO: Deleting ReplicationController up-down-1 took: 4.0753ms
May 13 23:22:03.889: INFO: Terminating ReplicationController up-down-1 pods took: 100.955627ms
STEP: verifying service up-down-1 is not up
May 13 23:22:12.700: INFO: Creating new host exec pod
May 13 23:22:12.712: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:14.716: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:16.716: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 13 23:22:16.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.42:80 && echo service-down-failed'
May 13 23:22:18.982: INFO: rc: 28
May 13 23:22:18.982: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.24.42:80 && echo service-down-failed" in pod services-1082/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.42:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.24.42:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1082
STEP: verifying service up-down-2 is still up
May 13 23:22:18.993: INFO: Creating new host exec pod
May 13 23:22:19.005: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:21.010: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:23.012: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:25.009: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:22:25.009: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:22:29.031: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-host-exec-pod
May 13 23:22:29.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:29.550: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n... (remaining 149 identical '+ wget'/'+ echo' iterations elided) ...\n"
May 13 23:22:29.550: INFO: stdout: "up-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2
-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\n"
May 13 23:22:29.550: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-exec-pod-gb97l
May 13 23:22:29.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-exec-pod-gb97l -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:30.063: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n"
May 13 23:22:30.064: INFO: stdout: "up-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2
-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1082
STEP: Deleting pod verify-service-up-exec-pod-gb97l in namespace services-1082
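The verification pattern in the log above is: run `for i in $(seq 1 150); do wget -q -T 1 -O - http://<clusterIP>:80 || true; echo; done` inside an exec pod, then treat each non-empty stdout line as the hostname of the backend pod that served the request, and require that the set of distinct hostnames matches the expected replica count. A minimal sketch of that post-processing step (the helper name `reachable_backends` is illustrative, not from the e2e framework):

```python
# Hedged sketch: how the "verifying service has N reachable backends" check
# can be decided from the wget-loop stdout shown in the log. Each successful
# request prints the serving pod's hostname; a failed request (|| true; echo)
# contributes only a blank line, which we discard.

def reachable_backends(stdout: str) -> set:
    """Return the set of distinct backend pod names seen in the loop output."""
    return {line for line in stdout.splitlines() if line}

# Example using hostnames taken from the log above; the blank line stands in
# for one timed-out request.
sample = "up-down-2-rr2wz\nup-down-2-69k5s\n\nup-down-2-sr9jb\nup-down-2-rr2wz\n"
backends = reachable_backends(sample)
print(sorted(backends))  # ['up-down-2-69k5s', 'up-down-2-rr2wz', 'up-down-2-sr9jb']
assert len(backends) == 3  # the service has 3 reachable backends
```

With 150 probes against a 3-replica service, round-robin or random endpoint selection makes it overwhelmingly likely that every healthy backend appears at least once, which is why a fixed-count loop is a reliable reachability check here.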
STEP: creating service up-down-3 in namespace services-1082
STEP: creating service up-down-3 in namespace services-1082
STEP: creating replication controller up-down-3 in namespace services-1082
I0513 23:22:30.087157      30 runners.go:190] Created replication controller with name: up-down-3, namespace: services-1082, replica count: 3
I0513 23:22:33.139051      30 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0513 23:22:36.140017      30 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
May 13 23:22:36.142: INFO: Creating new host exec pod
May 13 23:22:36.153: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:38.159: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:22:38.159: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:22:42.179: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-host-exec-pod
May 13 23:22:42.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:42.600: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n"
May 13 23:22:42.600: INFO: stdout: "up-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2
-sr9jb\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-sr9jb\nup-down-2-69k5s\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-sr9jb\nup-down-2-rr2wz\nup-down-2-rr2wz\nup-down-2-sr9jb\n"
May 13 23:22:42.600: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-exec-pod-qzqp9
May 13 23:22:42.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-exec-pod-qzqp9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.125:80 2>&1 || true; echo; done'
May 13 23:22:43.011: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.125:80\n+ echo\n [… identical '+ wget -q -T 1 -O - http://10.233.36.125:80' / '+ echo' trace repeated for all remaining iterations of seq 1 150, elided …]\n"
May 13 23:22:43.012: INFO: stdout: "up-down-2-sr9jb\nup-down-2-sr9jb\nup-down-2-69k5s\n [… 150 responses total, all from the backends up-down-2-sr9jb, up-down-2-69k5s, and up-down-2-rr2wz; remainder elided …]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1082
STEP: Deleting pod verify-service-up-exec-pod-qzqp9 in namespace services-1082
STEP: verifying service up-down-3 is up
May 13 23:22:43.028: INFO: Creating new host exec pod
May 13 23:22:43.042: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:45.045: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:47.047: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:49.048: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:51.046: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:53.051: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:55.046: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:57.048: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:22:59.048: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 13 23:23:01.045: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 13 23:23:01.045: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 13 23:23:05.066: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.62.54:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-host-exec-pod
May 13 23:23:05.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.62.54:80 2>&1 || true; echo; done'
May 13 23:23:05.439: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.62.54:80\n+ echo\n [… identical '+ wget -q -T 1 -O - http://10.233.62.54:80' / '+ echo' trace repeated for all 150 iterations, elided …]\n"
May 13 23:23:05.439: INFO: stdout: "up-down-3-p9bdk\nup-down-3-p9bdk\nup-down-3-dmp27\n [… 150 responses total, all from the backends up-down-3-p9bdk, up-down-3-dmp27, and up-down-3-cj5lr; remainder elided …]\n"
May 13 23:23:05.439: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.62.54:80 2>&1 || true; echo; done" in pod services-1082/verify-service-up-exec-pod-xzzbz
May 13 23:23:05.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1082 exec verify-service-up-exec-pod-xzzbz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.62.54:80 2>&1 || true; echo; done'
May 13 23:23:05.784: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.62.54:80\n+ echo\n [… identical '+ wget -q -T 1 -O - http://10.233.62.54:80' / '+ echo' trace repeated for all 150 iterations, elided …]\n"
May 13 23:23:05.784: INFO: stdout: "up-down-3-dmp27\nup-down-3-cj5lr\nup-down-3-cj5lr\n [… 150 responses total, all from the backends up-down-3-p9bdk, up-down-3-dmp27, and up-down-3-cj5lr; remainder elided …]\n"
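The "verifying service has 3 reachable backends" step above boils down to issuing many requests through the service VIP and confirming that every backend pod name appears at least once in the responses. A minimal offline sketch of that check in Python, with canned responses standing in for the wget output (verify_backends is a hypothetical helper, not part of the e2e framework):

```python
# Sketch of the "service has N reachable backends" check implied by the log:
# collect many responses and verify every expected backend answered at least
# once. Responses here are canned rather than fetched over the network.
def verify_backends(responses, expected):
    """Return True iff every expected backend name appears in the responses."""
    seen = {r for r in responses if r}  # drop empty (timed-out) replies
    return set(expected) <= seen

# Simulate 150 requests load-balanced across the three up-down-3 pods,
# with occasional timeouts (empty strings).
responses = ["up-down-3-p9bdk", "up-down-3-dmp27", "", "up-down-3-cj5lr"] * 50
assert verify_backends(
    responses, ["up-down-3-p9bdk", "up-down-3-dmp27", "up-down-3-cj5lr"]
)
```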
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1082
STEP: Deleting pod verify-service-up-exec-pod-xzzbz in namespace services-1082
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:23:05.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1082" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:101.993 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":373,"failed":0}
May 13 23:23:05.814: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:21:45.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-930
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 13 23:21:46.014: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 13 23:21:46.046: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:48.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 13 23:21:50.050: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:52.051: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:54.050: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:56.049: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:21:58.053: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:00.051: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:02.050: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:04.048: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:06.049: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 13 23:22:08.052: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 13 23:22:08.057: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 13 23:22:16.089: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 13 23:22:16.089: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May 13 23:22:16.108: INFO: Service node-port-service in namespace nettest-930 found.
May 13 23:22:16.121: INFO: Service session-affinity-service in namespace nettest-930 found.
STEP: Waiting for NodePort service to expose endpoint
May 13 23:22:17.124: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May 13 23:22:18.127: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) 10.10.190.207 (node) --> 10.10.190.207:31972 (nodeIP) and getting ALL host endpoints
May 13 23:22:18.130: INFO: Going to poll 10.10.190.207 on port 31972 at least 0 times, with a maximum of 34 tries before failing
May 13 23:22:18.132: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName | grep -v '^\s*$'] Namespace:nettest-930 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:22:18.132: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:22:18.235: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
May 13 23:22:18.235: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
[... 32 further identical poll attempts between May 13 23:22:20.239 and 23:23:26.458, each running the same curl via ExecWithOptions, each failing with "command terminated with exit code 1", stdout: "", stderr: "", and each reporting: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[]) ...]
May 13 23:23:28.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName | grep -v '^\s*$'] Namespace:nettest-930 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 13 23:23:28.465: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:23:28.544: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
May 13 23:23:28.544: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
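The attempts above are a bounded retry loop: the harness repeats the curl probe (up to the "maximum of 34 tries" announced at the start) until every expected endpoint has answered, then gives up. A minimal sketch of that logic, with a stubbed probe standing in for the real in-pod curl exec (the function and stub are illustrative only, not the e2e framework's code):

```python
def poll_endpoints(probe, expected, max_tries):
    """Retry `probe` until the union of its results covers `expected`,
    giving up after `max_tries` attempts (mirrors the harness's
    'maximum of 34 tries before failing' loop)."""
    seen = set()
    for _ in range(max_tries):
        seen |= set(probe())          # one curl attempt per iteration
        if seen >= set(expected):
            return True, seen         # all expected endpoints answered
    return False, seen                # exhausted retries, as in this run

# A probe that always returns nothing, like the failing curl calls above:
ok, seen = poll_endpoints(lambda: [], ["netserver-0", "netserver-1"], 34)
# ok is False and seen is empty, matching 'actual=[]' in the log.
```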
May 13 23:23:30.546: INFO: 
Output of kubectl describe pod nettest-930/netserver-0:

May 13 23:23:30.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-930 describe pod netserver-0 --namespace=nettest-930'
May 13 23:23:30.743: INFO: stderr: ""
May 13 23:23:30.743: INFO: stdout: (full pod description; printed verbatim below)
May 13 23:23:30.743: INFO: Name:         netserver-0
Namespace:    nettest-930
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 13 May 2022 23:21:46 +0000
Labels:       selector-200c834c-829f-48f9-9706-beae11be872a=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.187"
                    ],
                    "mac": "4a:ee:7b:ed:30:f1",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.187"
                    ],
                    "mac": "4a:ee:7b:ed:30:f1",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.187
IPs:
  IP:  10.244.3.187
Containers:
  webserver:
    Container ID:  docker://8f7508debe8b27bc27fdccad5d4227f4b6c0210b8dbcb25c7f3ce5f483a786a3
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 13 May 2022 23:21:49 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkhlp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-qkhlp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  104s  default-scheduler  Successfully assigned nettest-930/netserver-0 to node1
  Normal  Pulling    102s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     102s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 286.775013ms
  Normal  Created    102s  kubelet            Created container webserver
  Normal  Started    101s  kubelet            Started container webserver

May 13 23:23:30.743: INFO: 
Output of kubectl describe pod nettest-930/netserver-1:

May 13 23:23:30.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-930 describe pod netserver-1 --namespace=nettest-930'
May 13 23:23:30.942: INFO: stderr: ""
May 13 23:23:30.942: INFO: stdout: (full pod description; printed verbatim below)
May 13 23:23:30.942: INFO: Name:         netserver-1
Namespace:    nettest-930
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 13 May 2022 23:21:46 +0000
Labels:       selector-200c834c-829f-48f9-9706-beae11be872a=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.76"
                    ],
                    "mac": "c2:31:4e:2a:d9:87",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.76"
                    ],
                    "mac": "c2:31:4e:2a:d9:87",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.76
IPs:
  IP:  10.244.4.76
Containers:
  webserver:
    Container ID:  docker://c5a123eb20b17264781fff44949c31df51fea11d372c412544fdc7c8e392e037
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 13 May 2022 23:21:51 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-js2rl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-js2rl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  104s  default-scheduler  Successfully assigned nettest-930/netserver-1 to node2
  Normal  Pulling    101s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     100s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 273.639387ms
  Normal  Created    100s  kubelet            Created container webserver
  Normal  Started    99s   kubelet            Started container webserver
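The description above shows the pod runs agnhost's `netexec` server, which answers `GET /healthz` on port 8080 (used by both the liveness and readiness probes) and `GET /hostName` (used by the connectivity check that fails below). A minimal sketch of such an HTTP probe, assuming a reachable address (the pod IP 10.244.4.76 from the description is just an example; this is not the e2e framework's own probe code):

```python
import urllib.request

def http_probe(url, timeout=30):
    """Return True if the endpoint answers 200 within the timeout, else False.

    Mirrors the shape of the pod's probe: http-get /healthz,
    timeout=30s, one success required.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, HTTP error
        return False

# e.g. http_probe("http://10.244.4.76:8080/healthz")
```

Kubelet repeats this check every `period=10s` and marks the container unready after `#failure=3` consecutive failures, which is why both netserver pods report Ready=True above even though the NodePort path fails later.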

May 13 23:23:30.943: FAIL: Error dialing http from node: failed to find expected endpoints, 
tries 34
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]
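The failure above comes from a polling check: the test repeatedly curls `http://10.10.190.207:31972/hostName` and collects the hostnames returned, expecting to eventually see every netserver pod; after 34 tries it had retrieved nothing (`map[]`). A hypothetical reconstruction of that loop's logic (names and the injectable `probe` callable are illustrative, not the actual e2e framework API):

```python
def dial_from_node(probe, expected, max_tries=34):
    """Poll `probe` (stands in for one curl of /hostName) until the set of
    hostnames seen covers `expected`, or `max_tries` is exhausted.

    Returns (ok, seen); a final (False, set()) corresponds to the
    "retrieved map[] expected map[netserver-0:{} netserver-1:{}]" failure.
    """
    seen = set()
    for _ in range(max_tries):
        host = probe()       # e.g. body of GET /hostName, or None on timeout
        if host:
            seen.add(host)
        if expected <= seen:  # every expected endpoint has answered
            return True, seen
    return False, seen
```

Since both netserver pods are Running and Ready, an empty result set after 34 tries points at the NodePort (31972 on 10.10.190.207) not forwarding traffic to the backends rather than at the pods themselves.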

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004abe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0004abe00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0004abe00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-930".
STEP: Found 20 events.
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:46 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-930/netserver-0 to node1
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:46 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-930/netserver-1 to node2
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:48 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 286.775013ms
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:48 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:48 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:49 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:49 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:50 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
May 13 23:23:30.949: INFO: At 2022-05-13 23:21:50 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 273.639387ms
May 13 23:23:30.950: INFO: At 2022-05-13 23:21:51 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:08 +0000 UTC - event for host-test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-930/host-test-container-pod to node2
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:08 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-930/test-container-pod to node2
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:10 +0000 UTC - event for host-test-container-pod: {kubelet node2} Created: Created container agnhost-container
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:10 +0000 UTC - event for host-test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 277.201741ms
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:10 +0000 UTC - event for host-test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:11 +0000 UTC - event for host-test-container-pod: {kubelet node2} Started: Started container agnhost-container
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:11 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:12 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 381.179131ms
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:12 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
May 13 23:23:30.950: INFO: At 2022-05-13 23:22:12 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
May 13 23:23:30.954: INFO: POD                      NODE   PHASE    GRACE  CONDITIONS
May 13 23:23:30.954: INFO: host-test-container-pod  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:08 +0000 UTC  }]
May 13 23:23:30.954: INFO: netserver-0              node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:46 +0000 UTC  }]
May 13 23:23:30.954: INFO: netserver-1              node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:21:46 +0000 UTC  }]
May 13 23:23:30.954: INFO: test-container-pod       node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-13 23:22:08 +0000 UTC  }]
May 13 23:23:30.954: INFO: 
May 13 23:23:30.959: INFO: 
Logging node info for node master1
May 13 23:23:30.961: INFO: Node Info: &Node{ObjectMeta:{master1    e893469e-45f9-457b-9379-276178f6209f 76482 0 2022-05-13 19:57:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:57:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-13 19:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-13 20:05:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-13 20:09:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:57:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 20:03:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5bc4f1fb629f4c3bb455995355cca59c,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:196d75bb-273f-44bf-9b96-1cfef0d34445,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:23:30.962: INFO: 
Logging kubelet events for node master1
May 13 23:23:30.965: INFO: 
Logging pods the kubelet thinks are on node master1
May 13 23:23:30.974: INFO: kube-proxy-6q994 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:23:30.974: INFO: node-feature-discovery-controller-cff799f9f-k2qmv started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container nfd-controller ready: true, restart count 0
May 13 23:23:30.974: INFO: node-exporter-2jxfg started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:30.974: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:23:30.974: INFO: kube-apiserver-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:23:30.974: INFO: kube-controller-manager-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:23:30.974: INFO: kube-scheduler-master1 started at 2022-05-13 20:03:13 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-scheduler ready: true, restart count 0
May 13 23:23:30.974: INFO: kube-flannel-jw4mp started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:23:30.974: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:23:30.974: INFO: kube-multus-ds-amd64-ts4fz started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:23:30.974: INFO: container-registry-65d7c44b96-gqdgz started at 2022-05-13 20:05:09 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:30.974: INFO: 	Container docker-registry ready: true, restart count 0
May 13 23:23:30.974: INFO: 	Container nginx ready: true, restart count 0
May 13 23:23:31.057: INFO: 
Latency metrics for node master1
May 13 23:23:31.057: INFO: 
Logging node info for node master2
May 13 23:23:31.060: INFO: Node Info: &Node{ObjectMeta:{master2    6394fb00-7ac6-4b0d-af37-0e7baf892992 76480 0 2022-05-13 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:23:26 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0c26206724384f32848637ec210bf517,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:87b6bd6a-947f-4fda-a24f-503738da156e,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:23:31.060: INFO: 
Logging kubelet events for node master2
May 13 23:23:31.062: INFO: 
Logging pods the kubelet thinks are on node master2
May 13 23:23:31.071: INFO: kube-multus-ds-amd64-w98wb started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:23:31.071: INFO: coredns-8474476ff8-m6b8s started at 2022-05-13 20:01:00 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container coredns ready: true, restart count 1
May 13 23:23:31.071: INFO: kube-apiserver-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:23:31.071: INFO: kube-proxy-jxbwz started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:23:31.071: INFO: kube-flannel-gndff started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:23:31.071: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:23:31.071: INFO: kube-controller-manager-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:23:31.071: INFO: kube-scheduler-master2 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:23:31.071: INFO: node-exporter-zmlpx started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.071: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:31.071: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:23:31.144: INFO: 
Latency metrics for node master2
May 13 23:23:31.144: INFO: 
Logging node info for node master3
May 13 23:23:31.147: INFO: Node Info: &Node{ObjectMeta:{master3    11a40d0b-d9d1-449f-a587-cc897edbfd9b 76472 0 2022-05-13 19:58:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-13 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-13 20:00:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-13 20:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:24 +0000 UTC,LastTransitionTime:2022-05-13 20:03:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:58:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 20:00:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:96fba609db464f479c06da20414d1979,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:55d995b3-c2cc-4b60-96f4-5a990abd0c48,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:23:31.147: INFO: 
Logging kubelet events for node master3
May 13 23:23:31.149: INFO: 
Logging pods the kubelet thinks are on node master3
May 13 23:23:31.158: INFO: kube-apiserver-master3 started at 2022-05-13 19:58:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.158: INFO: 	Container kube-apiserver ready: true, restart count 0
May 13 23:23:31.158: INFO: kube-multus-ds-amd64-ffgk5 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.158: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:23:31.158: INFO: coredns-8474476ff8-x29nh started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.158: INFO: 	Container coredns ready: true, restart count 1
May 13 23:23:31.158: INFO: kube-controller-manager-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.158: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 13 23:23:31.159: INFO: kube-scheduler-master3 started at 2022-05-13 20:07:22 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.159: INFO: 	Container kube-scheduler ready: true, restart count 2
May 13 23:23:31.159: INFO: kube-proxy-6fl99 started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.159: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:23:31.159: INFO: kube-flannel-p5mwf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:23:31.159: INFO: 	Init container install-cni ready: true, restart count 0
May 13 23:23:31.159: INFO: 	Container kube-flannel ready: true, restart count 1
May 13 23:23:31.159: INFO: dns-autoscaler-7df78bfcfb-wfmpz started at 2022-05-13 20:01:02 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.159: INFO: 	Container autoscaler ready: true, restart count 1
May 13 23:23:31.159: INFO: node-exporter-qh76s started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.159: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:31.159: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:23:31.249: INFO: 
Latency metrics for node master3
May 13 23:23:31.249: INFO: 
Logging node info for node node1
May 13 23:23:31.252: INFO: Node Info: &Node{ObjectMeta:{node1    dca01e5e-a739-4ccc-b102-bfd163c4b832 76484 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:26:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-13 23:04:48 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:20 +0000 UTC,LastTransitionTime:2022-05-13 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:27 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:27 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:27 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:23:27 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f73ea6ef9607468c91208265a5b02a1b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ff172cf5-ca8f-45aa-ade2-6dea8be1d249,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003949300,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 
quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:23:31.253: INFO: 
Logging kubelet events for node node1
May 13 23:23:31.255: INFO: 
Logging pods the kubelet thinks are on node node1
May 13 23:23:31.272: INFO: kubernetes-dashboard-785dcbb76d-tcgth started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.272: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 13 23:23:31.272: INFO: kube-flannel-xfj7m started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:23:31.272: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:23:31.272: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:23:31.272: INFO: cmk-tfblh started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.272: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:23:31.272: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:23:31.272: INFO: cmk-webhook-6c9d5f8578-59hj6 started at 2022-05-13 20:13:16 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container cmk-webhook ready: true, restart count 0
May 13 23:23:31.273: INFO: collectd-p26j2 started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container collectd ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:23:31.273: INFO: node-exporter-42x8d started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:23:31.273: INFO: kube-multus-ds-amd64-dtt2x started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:23:31.273: INFO: node-feature-discovery-worker-l459c started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:23:31.273: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:23:31.273: INFO: prometheus-k8s-0 started at 2022-05-13 20:14:32 +0000 UTC (0+4 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container config-reloader ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container grafana ready: true, restart count 0
May 13 23:23:31.273: INFO: 	Container prometheus ready: true, restart count 1
May 13 23:23:31.273: INFO: kube-proxy-rs2zg started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:23:31.273: INFO: cmk-init-discover-node1-m2p59 started at 2022-05-13 20:12:33 +0000 UTC (0+3 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container discover ready: false, restart count 0
May 13 23:23:31.273: INFO: 	Container init ready: false, restart count 0
May 13 23:23:31.273: INFO: 	Container install ready: false, restart count 0
May 13 23:23:31.273: INFO: netserver-0 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container webserver ready: true, restart count 0
May 13 23:23:31.273: INFO: nginx-proxy-node1 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:23:31.273: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v started at 2022-05-13 20:01:04 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.273: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
May 13 23:23:31.419: INFO: 
Latency metrics for node node1
May 13 23:23:31.419: INFO: 
Logging node info for node node2
May 13 23:23:31.422: INFO: Node Info: &Node{ObjectMeta:{node2    461ea6c2-df11-4be4-802e-29bddc0f2535 76468 0 2022-05-13 19:59:24 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-13 19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-13 
19:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-13 20:00:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-13 20:09:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-13 20:12:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-13 22:24:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-13 20:03:19 +0000 UTC,LastTransitionTime:2022-05-13 20:03:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 19:59:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-13 23:23:23 +0000 UTC,LastTransitionTime:2022-05-13 20:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b36a7c38429c4cc598bd0e6ca8278ad0,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:4fcc32fc-d037-4cf9-a62f-f372f6cc17cb,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d8398bd2fbf57c3876ac16f34cb433ab2d1f188395698e4c4bc72d7a927b936 localhost:30500/cmk:v1.5.1],SizeBytes:727676786,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[nginx@sha256:2c72b42c3679c1c819d46296c4e79e69b2616fa28bea92e61d358980e18c9751 nginx:latest],SizeBytes:141522805,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:4830fe72047e95e1bc06e239fa0e30d5670d0e445a3b5319414e1b7499278d28 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bf97fdc7070a276c4987a283c7fc0a94b6ed19c359c015730163a24fed45f2b3 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 13 23:23:31.423: INFO: 
Logging kubelet events for node node2
May 13 23:23:31.425: INFO: 
Logging pods the kubelet thinks is on node node2
May 13 23:23:31.437: INFO: node-exporter-n5snd started at 2022-05-13 20:14:18 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:31.437: INFO: 	Container node-exporter ready: true, restart count 0
May 13 23:23:31.437: INFO: netserver-1 started at 2022-05-13 23:21:46 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container webserver ready: true, restart count 0
May 13 23:23:31.437: INFO: kube-multus-ds-amd64-l7nx2 started at 2022-05-13 20:00:33 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container kube-multus ready: true, restart count 1
May 13 23:23:31.437: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 started at 2022-05-13 20:17:23 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container tas-extender ready: true, restart count 0
May 13 23:23:31.437: INFO: node-feature-discovery-worker-cxxqf started at 2022-05-13 20:08:58 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container nfd-worker ready: true, restart count 0
May 13 23:23:31.437: INFO: test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container webserver ready: true, restart count 0
May 13 23:23:31.437: INFO: kube-flannel-lv9xf started at 2022-05-13 20:00:24 +0000 UTC (1+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Init container install-cni ready: true, restart count 2
May 13 23:23:31.437: INFO: 	Container kube-flannel ready: true, restart count 2
May 13 23:23:31.437: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt started at 2022-05-13 20:10:11 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 13 23:23:31.437: INFO: collectd-9gqhr started at 2022-05-13 20:18:14 +0000 UTC (0+3 container statuses recorded)
May 13 23:23:31.437: INFO: 	Container collectd ready: true, restart count 0
May 13 23:23:31.437: INFO: 	Container collectd-exporter ready: true, restart count 0
May 13 23:23:31.437: INFO: 	Container rbac-proxy ready: true, restart count 0
May 13 23:23:31.438: INFO: cmk-qhbd6 started at 2022-05-13 20:13:15 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container nodereport ready: true, restart count 0
May 13 23:23:31.438: INFO: 	Container reconcile ready: true, restart count 0
May 13 23:23:31.438: INFO: prometheus-operator-585ccfb458-vrwnp started at 2022-05-13 20:14:11 +0000 UTC (0+2 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 13 23:23:31.438: INFO: 	Container prometheus-operator ready: true, restart count 0
May 13 23:23:31.438: INFO: host-test-container-pod started at 2022-05-13 23:22:08 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container agnhost-container ready: true, restart count 0
May 13 23:23:31.438: INFO: nginx-proxy-node2 started at 2022-05-13 19:59:24 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container nginx-proxy ready: true, restart count 2
May 13 23:23:31.438: INFO: kube-proxy-wkzbm started at 2022-05-13 19:59:27 +0000 UTC (0+1 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container kube-proxy ready: true, restart count 2
May 13 23:23:31.438: INFO: cmk-init-discover-node2-hm7r7 started at 2022-05-13 20:12:52 +0000 UTC (0+3 container statuses recorded)
May 13 23:23:31.438: INFO: 	Container discover ready: false, restart count 0
May 13 23:23:31.438: INFO: 	Container init ready: false, restart count 0
May 13 23:23:31.438: INFO: 	Container install ready: false, restart count 0
May 13 23:23:31.570: INFO: 
Latency metrics for node node2
May 13 23:23:31.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-930" for this suite.


• Failure [105.701 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    May 13 23:23:30.943: Error dialing http from node: failed to find expected endpoints, 
    tries 34
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31972/hostName
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
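The failure above shows the retrieved endpoint set (`map[]`) never converging to the expected one (`map[netserver-0:{} netserver-1:{}]`) within 34 tries. Conceptually, the e2e check repeatedly curls `http://<nodeIP>:<nodePort>/hostName` and accumulates the responding pod names until the set matches the expected endpoints or the try budget is exhausted. A minimal sketch of that polling loop (names and structure are illustrative, not the real `test/e2e/network` code):

```python
import itertools

def poll_endpoints(fetch_hostname, expected, max_tries=34):
    """fetch_hostname() returns one backend's hostname per call, or None on timeout.
    Returns (converged, hostnames_seen_so_far)."""
    seen = set()
    for _ in range(max_tries):
        host = fetch_hostname()
        if host is not None:
            seen.add(host)
        if seen == expected:
            return True, seen
    return False, seen

# Simulated healthy service: both netserver pods answer in turn,
# so the check converges well before the try budget runs out.
responses = itertools.cycle(["netserver-0", "netserver-1"])
ok, seen = poll_endpoints(lambda: next(responses),
                          {"netserver-0", "netserver-1"})
```

In the failing run, every curl timed out (`retrieved map[]`), so `seen` stayed empty for all 34 tries, which typically points at the NodePort (31972 here) not forwarding to the backend pods rather than at the pods themselves.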
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]","total":-1,"completed":1,"skipped":191,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","[sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]"]}
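The per-spec status line above is a JSON record emitted by the e2e runner; its `failures` array carries the full names of every spec that has failed so far on this parallel node, which is handy for scripting reruns. A minimal sketch of extracting them (the record string is copied from the log line above):

```python
import json

record = '{"msg":"FAILED [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]","total":-1,"completed":1,"skipped":191,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","[sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]"]}'

status = json.loads(record)
failed_specs = status["failures"]  # full spec names, usable as ginkgo focus strings
```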
May 13 23:23:31.588: INFO: Running AfterSuite actions on all nodes
May 13 23:23:31.588: INFO: Running AfterSuite actions on node 1
May 13 23:23:31.588: INFO: Skipping dumping logs from cluster



Summarizing 4 Failures:

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for endpoint-Service: udp 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Networking Granular Checks: Services [It] should update nodePort: http [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 28 of 5773 Specs in 215.516 seconds
FAIL! -- 24 Passed | 4 Failed | 0 Pending | 5745 Skipped


Ginkgo ran 1 suite in 3m37.271664742s
Test Suite Failed