Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1653088953 - Will randomize all specs
Will run 5773 specs
Running in parallel across 10 nodes

May 20 23:22:34.879: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:34.884: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 23:22:34.908: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 23:22:34.972: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 20 23:22:34.972: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 20 23:22:34.972: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 23:22:34.972: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 20 23:22:34.972: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 23:22:34.990: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 20 23:22:34.990: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 20 23:22:34.990: INFO: e2e test version: v1.21.9
May 20 23:22:34.991: INFO: kube-apiserver version: v1.21.1
May 20 23:22:34.991: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:34.996: INFO: Cluster IP family: ipv4
May 20 23:22:34.999: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.019: INFO: Cluster IP family: ipv4
May 20 23:22:35.001: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.022: INFO: Cluster IP family: ipv4
May 20 23:22:35.010: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.031: INFO: Cluster IP family: ipv4
May 20 23:22:35.015: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.033: INFO: Cluster IP family: ipv4
May 20 23:22:35.018: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.038: INFO: Cluster IP family: ipv4
May 20 23:22:35.029: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.050: INFO: Cluster IP family: ipv4
May 20 23:22:35.032: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.053: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 20 23:22:35.041: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.061: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0520 23:22:35.067464 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:22:35.067: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:22:35.071: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
May 20 23:22:35.073: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:35.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6068" for this suite.

S [SKIPPING] [0.042 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Provider:GCE] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

Only supported for providers [gce gke] (not local)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
May 20 23:22:35.062: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:35.083: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W0520 23:22:35.393010 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:22:35.393: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:22:35.394: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May 20 23:22:35.396: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:35.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-2388" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have correct firewall rules for e2e cluster [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

Only supported for providers [gce] (not local)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 20 23:22:35.534: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:35.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4987" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should work for type=NodePort [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

Only supported for providers [gce gke] (not local)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 20 23:22:35.736: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:35.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-8665" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should work from pods [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

Only supported for providers [gce gke] (not local)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
W0520 23:22:35.750186 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:22:35.750: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:22:35.752: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 20 23:22:35.775: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 20 23:22:35.778: INFO: starting watch
STEP: patching
STEP: updating
May 20 23:22:35.786: INFO: waiting for watch events with expected annotations
May 20 23:22:35.786: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May 20 23:22:35.786: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:35.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-6784" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0520 23:22:35.423447 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:22:35.423: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:22:35.425: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
May 20 23:22:35.439: INFO: Creating new exec pod
May 20 23:22:41.455: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
May 20 23:22:41.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-437 exec execpod-noendpointsqprv8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
May 20 23:22:43.290: INFO: rc: 1
May 20 23:22:43.290: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-437 exec execpod-noendpointsqprv8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:43.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-437" for this suite.
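As an aside, the "no endpoints" scenario exercised above is easy to reproduce by hand: a Service whose selector matches no pods gets no Endpoints, so connections to its port are rejected. A minimal sketch follows; the service name `no-pods` and port 80 are taken from the log, while the selector label is purely illustrative:

```yaml
# Sketch: a Service that will never have endpoints, because its selector
# matches no pods. Connections to no-pods:80 are then rejected (REFUSED),
# which is what the test above asserts.
apiVersion: v1
kind: Service
metadata:
  name: no-pods
spec:
  selector:
    app: no-such-app   # illustrative label; intentionally matches nothing
  ports:
    - port: 80
      targetPort: 80
```

Probing it from an exec pod mirrors the logged command, e.g. `kubectl exec <execpod> -- /agnhost connect --timeout=3s no-pods:80`, which exits 1 and prints `REFUSED` on stderr when no endpoints exist.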
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:7.897 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be rejected when no endpoints exist
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":103,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:43.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
May 20 23:22:43.555: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-131" to be "Succeeded or Failed"
May 20 23:22:43.557: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540862ms
May 20 23:22:45.562: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006934831s
May 20 23:22:47.567: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011824121s
STEP: Saw pod success
May 20 23:22:47.567: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:47.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-131" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":2,"skipped":149,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:47.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May 20 23:22:47.613: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-5617" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
control plane should not expose well-known ports [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

Only supported for providers [gce] (not local)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
May 20 23:22:35.377: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-128fa637-c45f-474d-afc4-2b5aa9fb1a27 dns-943 a0a6f6a4-608e-45a4-aec9-4db534d13479 72413 0 2022-05-20 23:22:35 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-20 23:22:35 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-6pkgq,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-924v7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-924v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 20 23:22:45.387: INFO: testServerIP is 10.244.3.251
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 20 23:22:45.398: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils dns-943 9866161d-93ba-4bbc-8e7c-46177329378a 72711 0 2022-05-20 23:22:45 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-20 23:22:45 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fll4b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fll4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.251],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
May 20 23:22:51.406: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-943 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:22:51.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
May 20 23:22:51.844: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-943 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:22:51.844: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:22:51.935: INFO: Deleting pod e2e-dns-utils...
May 20 23:22:51.940: INFO: Deleting pod e2e-configmap-dns-server-128fa637-c45f-474d-afc4-2b5aa9fb1a27...
May 20 23:22:51.948: INFO: Deleting configmap e2e-coredns-configmap-6pkgq...
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:22:51.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-943" for this suite.
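The pod under test above uses `dnsPolicy: None` with a fully custom `dnsConfig`. A minimal sketch of an equivalent manifest follows, using only values visible in the log (pod name `e2e-dns-utils`, the agnhost image, nameserver 10.244.3.251, search domain resolv.conf.local, ndots 2):

```yaml
# Sketch: pod whose /etc/resolv.conf is built entirely from dnsConfig,
# mirroring the customized settings the test above verifies.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-dns-utils
spec:
  dnsPolicy: None            # ignore cluster DNS defaults entirely
  dnsConfig:
    nameservers:
      - 10.244.3.251         # the test's injected CoreDNS server IP
    searches:
      - resolv.conf.local
    options:
      - name: ndots
        value: "2"
  containers:
    - name: agnhost-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.32
      args: ["pause"]
```

With `dnsPolicy: None`, kubelet writes exactly these values into the pod's /etc/resolv.conf, which is why the test can assert on them with `cat /etc/resolv.conf` and exercise them with `dig +short +search notexistname`.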
• [SLOW TEST:16.617 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":73,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:22:35.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest W0520 23:22:35.405041 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:22:35.405: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:22:35.407: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update endpoints: udp 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 STEP: Performing setup for networking test in namespace nettest-7196 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:22:35.517: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:22:35.550: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:37.552: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:39.554: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:41.554: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:43.553: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:45.553: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:47.559: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:49.553: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:51.554: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:53.553: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:55.553: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:22:55.558: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:23:01.579: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses May 20 23:23:01.579: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:23:01.587: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 
23:23:01.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7196" for this suite. S [SKIPPING] [26.215 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update endpoints: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:22:35.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest W0520 23:22:35.454367 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:22:35.454: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:22:35.456: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168 STEP: Performing 
setup for networking test in namespace nettest-1362 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:22:35.635: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:22:35.664: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:37.670: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:39.670: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:41.669: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:43.668: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:45.668: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:47.668: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:49.669: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:51.669: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:53.667: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:55.667: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:22:55.671: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:22:57.676: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:23:01.697: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses May 20 23:23:01.697: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:23:01.703: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:23:01.705: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "nettest-1362" for this suite. S [SKIPPING] [26.281 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:22:35.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest W0520 23:22:35.272324 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:22:35.272: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:22:35.274: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should check kube-proxy urls 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 STEP: Performing setup for networking test in namespace nettest-8440 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:22:35.387: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:22:35.418: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:37.424: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:39.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:41.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:43.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:45.422: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:47.423: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:49.423: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:51.427: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:53.421: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:55.423: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:22:55.427: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:22:57.436: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:23:03.476: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses May 20 23:23:03.476: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:23:03.485: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:23:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8440" for this suite. S [SKIPPING] [28.248 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should check kube-proxy urls [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:22:35.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest W0520 23:22:35.350303 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:22:35.350: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:22:35.352: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 
STEP: Executing a successful http request from the external internet [It] should update endpoints: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334 STEP: Performing setup for networking test in namespace nettest-8956 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:22:35.461: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:22:35.493: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:37.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:39.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:41.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:43.497: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:45.498: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:47.501: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:49.498: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:51.501: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:53.497: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:55.498: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:22:55.503: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:22:57.508: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:23:05.528: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses May 20 23:23:05.528: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:23:05.536: INFO: Requires at least 2 nodes (not -1) [AfterEach] 
[sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:23:05.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8956" for this suite. S [SKIPPING] [30.221 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update endpoints: http [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:22:35.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest W0520 23:22:35.552339 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:22:35.552: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:22:35.554: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request 
from the external internet [It] should support basic nodePort: udp functionality /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387 STEP: Performing setup for networking test in namespace nettest-3473 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:22:35.668: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:22:35.699: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:37.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:39.704: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:41.703: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:43.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:45.703: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:22:47.703: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:49.704: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:51.703: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:53.703: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:55.704: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:22:57.704: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:22:57.710: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:23:05.743: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses May 20 23:23:05.743: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:23:05.750: INFO: 
Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:23:05.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-3473" for this suite. S [SKIPPING] [30.229 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should support basic nodePort: udp functionality [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:23:05.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85 May 20 23:23:05.761: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/:
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-7927
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:22:36.140: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:22:36.171: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:38.175: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:40.174: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:42.176: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:44.176: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:46.178: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:48.177: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:50.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:52.175: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:54.176: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:56.177: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:58.178: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:22:58.183: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:06.216: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:06.216: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:06.223: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:06.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7927" for this suite.


S [SKIPPING] [30.230 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:04.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1029.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1029.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1029.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1029.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1029.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 20 23:23:14.135: INFO: DNS probes using dns-1029/dns-test-de50879f-1bd3-410a-8831-96f1ce739af0 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:14.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1029" for this suite.


• [SLOW TEST:10.114 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":354,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:47.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-9448
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:22:47.898: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:22:47.929: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:49.933: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:51.935: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:53.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:55.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:57.935: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:59.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:01.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:03.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:05.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:07.937: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:09.933: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:11.932: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:13.933: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:13.937: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:23.959: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:23.959: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:23.966: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:23.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9448" for this suite.


S [SKIPPING] [36.188 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:52.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-5069
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:22:52.827: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:22:52.861: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:54.865: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:56.866: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:22:58.866: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:00.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:02.867: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:04.864: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:06.866: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:08.865: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:10.864: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:12.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:14.865: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:16.868: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:18.866: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:18.871: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:26.914: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:26.914: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:26.922: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:26.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5069" for this suite.


S [SKIPPING] [34.245 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:01.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-713
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:01.807: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:01.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:03.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:05.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:07.846: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:09.843: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:11.848: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:13.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:15.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:17.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:19.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:21.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:23.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:25.846: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:25.851: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:29.875: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:29.875: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:29.883: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:29.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-713" for this suite.


S [SKIPPING] [28.203 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:05.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-699
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:05.982: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:06.012: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:08.016: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:10.016: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:12.017: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:14.016: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:16.016: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:18.018: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:20.016: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:22.017: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:24.016: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:26.017: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:28.018: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:28.024: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:36.045: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:36.045: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:36.052: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:36.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-699" for this suite.


S [SKIPPING] [30.190 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:36.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 20 23:23:36.127: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 20 23:23:36.132: INFO: starting watch
STEP: patching
STEP: updating
May 20 23:23:36.144: INFO: waiting for watch events with expected annotations
May 20 23:23:36.144: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May 20 23:23:36.144: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:36.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-7116" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":190,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:02.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-4684
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:02.267: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:02.299: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:04.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:06.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:08.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:10.304: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:12.307: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:14.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:16.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:18.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:20.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:22.305: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:24.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:26.305: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:28.303: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:28.308: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:36.331: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:36.331: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:36.338: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:36.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4684" for this suite.


S [SKIPPING] [34.191 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:36.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-2292
STEP: changing service nodeport-range-test to out-of-range NodePort 58536
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 58536
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:36.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2292" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":440,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:06.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-5935
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:06.409: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:06.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:08.458: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:10.459: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:12.461: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:14.459: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:16.460: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:18.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:20.460: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:22.461: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:24.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:26.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:28.460: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:30.459: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:30.464: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:40.495: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:40.495: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:40.502: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:40.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5935" for this suite.


S [SKIPPING] [34.208 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:14.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-3947
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:14.305: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:14.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:16.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:18.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:20.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:22.341: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:24.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:26.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:28.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:30.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:32.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:34.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:36.340: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:36.345: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:46.368: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:46.368: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:46.376: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:46.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3947" for this suite.


S [SKIPPING] [32.192 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:40.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-584
STEP: deleting original service nodeport-reuse
May 20 23:23:40.894: INFO: Creating new host exec pod
May 20 23:23:40.914: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:42.917: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:44.918: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:46.919: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:48.917: INFO: The status of Pod hostexec is Running (Ready = true)
May 20 23:23:48.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-584 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :31890' | tail -n +2 | grep LISTEN'
May 20 23:23:49.447: INFO: stderr: "+ ss -ant46 'sport = :31890'\n+ tail -n +2\n+ grep LISTEN\n"
May 20 23:23:49.448: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 31890
STEP: deleting service nodeport-reuse in namespace services-584
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:49.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-584" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:8.633 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":577,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:50.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9184" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":866,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:36.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
May 20 23:23:46.614: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 20 23:23:46.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-test2ph7h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 20 23:23:46.863: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 20 23:23:46.863: INFO: stdout: "10.244.2.5:59688"
STEP: Verifying the preserved source ip
May 20 23:23:46.863: INFO: Waiting up to 2m0s to get response from 10.244.1.7:8080
May 20 23:23:46.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-test2ph7h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip'
May 20 23:23:47.102: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip\n"
May 20 23:23:47.102: INFO: stdout: "10.244.2.5:59790"
STEP: Verifying the preserved source ip
May 20 23:23:47.102: INFO: Waiting up to 2m0s to get response from 10.244.3.31:8080
May 20 23:23:47.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-test2ph7h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip'
May 20 23:23:47.362: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip\n"
May 20 23:23:47.362: INFO: stdout: "10.244.2.5:48432"
STEP: Verifying the preserved source ip
May 20 23:23:47.362: INFO: Waiting up to 2m0s to get response from 10.244.4.202:8080
May 20 23:23:47.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-test2ph7h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip'
May 20 23:23:47.620: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip\n"
May 20 23:23:47.620: INFO: stdout: "10.244.2.5:41206"
STEP: Verifying the preserved source ip
May 20 23:23:47.620: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May 20 23:23:47.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testdmq26 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May 20 23:23:47.874: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May 20 23:23:47.874: INFO: stdout: "10.244.0.7:47712"
STEP: Verifying the preserved source ip
May 20 23:23:47.874: INFO: Waiting up to 2m0s to get response from 10.244.1.7:8080
May 20 23:23:47.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testdmq26 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip'
May 20 23:23:48.127: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip\n"
May 20 23:23:48.127: INFO: stdout: "10.244.0.7:54020"
STEP: Verifying the preserved source ip
May 20 23:23:48.127: INFO: Waiting up to 2m0s to get response from 10.244.3.31:8080
May 20 23:23:48.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testdmq26 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip'
May 20 23:23:48.376: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip\n"
May 20 23:23:48.376: INFO: stdout: "10.244.0.7:35136"
STEP: Verifying the preserved source ip
May 20 23:23:48.376: INFO: Waiting up to 2m0s to get response from 10.244.4.202:8080
May 20 23:23:48.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testdmq26 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip'
May 20 23:23:48.621: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip\n"
May 20 23:23:48.621: INFO: stdout: "10.244.0.7:50198"
STEP: Verifying the preserved source ip
May 20 23:23:48.621: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May 20 23:23:48.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testkgxnt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May 20 23:23:48.866: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May 20 23:23:48.866: INFO: stdout: "10.244.1.7:46126"
STEP: Verifying the preserved source ip
May 20 23:23:48.866: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 20 23:23:48.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testkgxnt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 20 23:23:49.108: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 20 23:23:49.108: INFO: stdout: "10.244.1.7:47768"
STEP: Verifying the preserved source ip
May 20 23:23:49.108: INFO: Waiting up to 2m0s to get response from 10.244.3.31:8080
May 20 23:23:49.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testkgxnt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip'
May 20 23:23:49.340: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip\n"
May 20 23:23:49.340: INFO: stdout: "10.244.1.7:55952"
STEP: Verifying the preserved source ip
May 20 23:23:49.340: INFO: Waiting up to 2m0s to get response from 10.244.4.202:8080
May 20 23:23:49.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testkgxnt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip'
May 20 23:23:49.582: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip\n"
May 20 23:23:49.582: INFO: stdout: "10.244.1.7:53062"
STEP: Verifying the preserved source ip
May 20 23:23:49.582: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May 20 23:23:49.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testp9qvv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May 20 23:23:49.843: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May 20 23:23:49.843: INFO: stdout: "10.244.3.31:43216"
STEP: Verifying the preserved source ip
May 20 23:23:49.843: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 20 23:23:49.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testp9qvv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 20 23:23:50.102: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 20 23:23:50.102: INFO: stdout: "10.244.3.31:39042"
STEP: Verifying the preserved source ip
May 20 23:23:50.102: INFO: Waiting up to 2m0s to get response from 10.244.1.7:8080
May 20 23:23:50.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testp9qvv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip'
May 20 23:23:50.334: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip\n"
May 20 23:23:50.334: INFO: stdout: "10.244.3.31:54128"
STEP: Verifying the preserved source ip
May 20 23:23:50.334: INFO: Waiting up to 2m0s to get response from 10.244.4.202:8080
May 20 23:23:50.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testp9qvv -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip'
May 20 23:23:50.651: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.202:8080/clientip\n"
May 20 23:23:50.651: INFO: stdout: "10.244.3.31:50892"
STEP: Verifying the preserved source ip
May 20 23:23:50.651: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May 20 23:23:50.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testqs8fj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May 20 23:23:50.898: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May 20 23:23:50.899: INFO: stdout: "10.244.4.202:39528"
STEP: Verifying the preserved source ip
May 20 23:23:50.899: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May 20 23:23:50.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testqs8fj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May 20 23:23:51.148: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May 20 23:23:51.148: INFO: stdout: "10.244.4.202:55434"
STEP: Verifying the preserved source ip
May 20 23:23:51.148: INFO: Waiting up to 2m0s to get response from 10.244.1.7:8080
May 20 23:23:51.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testqs8fj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip'
May 20 23:23:51.410: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.7:8080/clientip\n"
May 20 23:23:51.410: INFO: stdout: "10.244.4.202:39824"
STEP: Verifying the preserved source ip
May 20 23:23:51.410: INFO: Waiting up to 2m0s to get response from 10.244.3.31:8080
May 20 23:23:51.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5092 exec no-snat-testqs8fj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip'
May 20 23:23:51.739: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.31:8080/clientip\n"
May 20 23:23:51.739: INFO: stdout: "10.244.4.202:45734"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:51.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-5092" for this suite.


• [SLOW TEST:15.224 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":3,"skipped":374,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:24.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-7956
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:24.354: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:24.388: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:26.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:28.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:30.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:32.392: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:34.392: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:36.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:38.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:40.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:42.392: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:44.478: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:46.392: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:48.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:50.390: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:50.395: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:54.416: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:54.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:54.424: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:54.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7956" for this suite.


S [SKIPPING] [30.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:54.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 20 23:23:54.569: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:54.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5778" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:30.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
May 20 23:23:30.114: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:32.117: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:34.119: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:36.118: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
May 20 23:23:36.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1268 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
May 20 23:23:36.493: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
May 20 23:23:36.493: INFO: stdout: "iptables"
May 20 23:23:36.493: INFO: proxyMode: iptables
May 20 23:23:36.500: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 20 23:23:36.503: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-1268
May 20 23:23:36.508: INFO: sourceip-test cluster ip: 10.233.62.74
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
May 20 23:23:36.525: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:38.528: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:40.530: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:42.530: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:44.531: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:46.534: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-1268 to expose endpoints map[echo-sourceip:[8080]]
May 20 23:23:46.548: INFO: successfully validated that service sourceip-test in namespace services-1268 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
May 20 23:23:46.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May 20 23:23:48.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6976b5d984\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 23:23:50.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685828, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6976b5d984\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 23:23:52.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685828, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685826, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6976b5d984\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 23:23:54.563: INFO: Waiting up to 2m0s to get response from 10.233.62.74:8080
May 20 23:23:54.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1268 exec pause-pod-6976b5d984-g2sdp -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.62.74:8080/clientip'
May 20 23:23:54.844: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.62.74:8080/clientip\n"
May 20 23:23:54.844: INFO: stdout: "10.244.4.207:52240"
STEP: Verifying the preserved source ip
May 20 23:23:54.844: INFO: Waiting up to 2m0s to get response from 10.233.62.74:8080
May 20 23:23:54.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1268 exec pause-pod-6976b5d984-sz52t -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.62.74:8080/clientip'
May 20 23:23:55.086: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.62.74:8080/clientip\n"
May 20 23:23:55.086: INFO: stdout: "10.244.3.33:53280"
STEP: Verifying the preserved source ip
May 20 23:23:55.087: INFO: Deleting deployment
May 20 23:23:55.091: INFO: Cleaning up the echo server pod
May 20 23:23:55.098: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:55.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1268" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:25.045 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":235,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:50.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-7534
May 20 23:23:50.342: INFO: hairpin-test cluster ip: 10.233.35.247
STEP: creating a client/server pod
May 20 23:23:50.356: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:52.358: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:54.362: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-7534 to expose endpoints map[hairpin:[8080]]
May 20 23:23:54.371: INFO: successfully validated that service hairpin-test in namespace services-7534 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
May 20 23:23:55.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7534 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
May 20 23:23:55.848: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
May 20 23:23:55.848: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
May 20 23:23:55.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7534 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.35.247 8080'
May 20 23:23:56.104: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.35.247 8080\nConnection to 10.233.35.247 8080 port [tcp/http-alt] succeeded!\n"
May 20 23:23:56.104: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:23:56.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7534" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:5.800 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":4,"skipped":868,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:36.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-de2f55f2-f295-4612-bbe8-e4084ced6688]
STEP: Verifying pods for RC slow-terminating-unready-pod
May 20 23:23:36.822: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
May 20 23:23:40.838: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-2zg9k]: "NOW: 2022-05-20 23:23:40.83652794 +0000 UTC m=+1.001827204", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-1918.svc.cluster.local
May 20 23:23:40.838: INFO: Creating new exec pod
May 20 23:23:52.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-bk9jh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/'
May 20 23:23:53.147: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/\n"
May 20 23:23:53.147: INFO: stdout: "NOW: 2022-05-20 23:23:53.136085681 +0000 UTC m=+13.301384947"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-1918 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
May 20 23:23:58.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-bk9jh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/; test "$?" -ne "0"'
May 20 23:23:59.728: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/\n+ test 7 -ne 0\n"
May 20 23:23:59.729: INFO: stdout: ""
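The `test "$?" -ne "0"` idiom in the command above is how the test asserts unreachability: `kubectl exec` only succeeds if the whole shell line exits 0, so negating the curl result turns a failed request (exit 7, couldn't connect, as seen in the `+ test 7 -ne 0` trace) into a passing step. A minimal local sketch of the same pattern, with a hypothetical closed port standing in for the service DNS name:

```shell
# Same pass/fail inversion the test uses; http://127.0.0.1:1/ is assumed
# closed, standing in for the service once its endpoints are removed.
out=$(if curl -q -s --connect-timeout 2 http://127.0.0.1:1/; then
        echo reachable
      else
        echo unreachable   # any non-zero curl exit code lands here
      fi)
echo "$out"
```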
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
May 20 23:23:59.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1918 exec execpod-bk9jh -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/'
May 20 23:24:00.046: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1918.svc.cluster.local:80/\n"
May 20 23:24:00.046: INFO: stdout: "NOW: 2022-05-20 23:24:00.036810163 +0000 UTC m=+20.202109429"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-1918
STEP: deleting service tolerate-unready in namespace services-1918
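For context, the "tolerate unready" behaviour toggled in the steps above is a single Service setting. This is an illustrative sketch, not the test's exact object: names and the selector are taken from the log, and in current Kubernetes the `spec.publishNotReadyAddresses` field supersedes the older `service.alpha.kubernetes.io/tolerate-unready-endpoints` annotation this test exercises.

```shell
# Illustrative manifest only -- hedged sketch of the kind of Service involved.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: tolerate-unready
  namespace: services-1918
spec:
  selector:
    name: slow-terminating-unready-pod
  ports:
  - port: 80
    targetPort: 80
  # true  -> endpoints include unready/terminating pods (initial state)
  # false -> unready pods are dropped (the "Check if pod is unreachable" phase)
  publishNotReadyAddresses: true
EOF
)
printf '%s\n' "$manifest"
```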
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:00.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1918" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:23.299 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":2,"skipped":532,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0520 23:22:35.261499      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:22:35.261: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:22:35.263: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-8601
STEP: creating service service-headless in namespace services-8601
STEP: creating replication controller service-headless in namespace services-8601
I0520 23:22:35.274759      38 runners.go:190] Created replication controller with name: service-headless, namespace: services-8601, replica count: 3
I0520 23:22:38.325750      38 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:22:41.327007      38 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:22:44.327395      38 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:22:47.330266      38 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-8601
STEP: creating service service-headless-toggled in namespace services-8601
STEP: creating replication controller service-headless-toggled in namespace services-8601
I0520 23:22:47.345611      38 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-8601, replica count: 3
I0520 23:22:50.397422      38 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:22:53.398505      38 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May 20 23:22:53.401: INFO: Creating new host exec pod
May 20 23:22:53.414: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:55.418: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:57.419: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:59.419: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:22:59.420: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:23:05.436: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done" in pod services-8601/verify-service-up-host-exec-pod
May 20 23:23:05.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done'
May 20 23:23:06.076: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.12.249:80\n+ echo\n"
May 20 23:23:06.076: INFO: stdout: 150 backend responses, each one of "service-headless-toggled-7l76n", "service-headless-toggled-9dgr5" or "service-headless-toggled-fl948" (all three replicas answered)
May 20 23:23:06.077: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done" in pod services-8601/verify-service-up-exec-pod-jlmb8
May 20 23:23:06.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-up-exec-pod-jlmb8 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done'
May 20 23:23:06.496: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.12.249:80\n+ echo\n"
May 20 23:23:06.496: INFO: stdout: 150 backend responses, each one of "service-headless-toggled-7l76n", "service-headless-toggled-9dgr5" or "service-headless-toggled-fl948" (all three replicas answered)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8601
STEP: Deleting pod verify-service-up-exec-pod-jlmb8 in namespace services-8601
STEP: verifying service-headless is not up
May 20 23:23:06.512: INFO: Creating new host exec pod
May 20 23:23:06.525: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:08.528: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:10.529: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:12.530: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:14.529: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:16.528: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:18.528: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:20.529: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
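The run of "Pending, waiting for it to be Running" lines above comes from a fixed-interval poll of the pod's status. A minimal sketch of that poll-until-ready pattern in Python (the function name and parameters are illustrative, not the e2e framework's actual API):

```python
import time

def wait_until(predicate, timeout=60.0, interval=2.0):
    """Poll `predicate` every `interval` seconds until it returns truthy
    or `timeout` elapses; mirrors the 2-second pod-status polling above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True  # pod reached Running (Ready = true)
        time.sleep(interval)
    return False  # gave up waiting, as the framework would after its timeout
```

In the log, the predicate would be "pod phase is Running and Ready is true"; here any zero-argument callable works.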
May 20 23:23:20.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed'
May 20 23:23:22.825: INFO: rc: 28
May 20 23:23:22.825: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed" in pod services-8601/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.25.42:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8601
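The "verifying service is not up" step above passes precisely because curl fails: exit code 28 is curl's connect-timeout error, so a timed-out connection proves nothing is answering on the ClusterIP. A hedged Python equivalent of that check, using a raw TCP connect instead of curl (the helper name `service_is_down` is ours, not the framework's):

```python
import socket

def service_is_down(host, port, timeout=2.0):
    """Return True if nothing accepts a TCP connection at host:port within
    `timeout` seconds; analogous to `curl --connect-timeout 2` exiting
    28 (timeout) or 7 (connection refused)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # something answered, so the service is still up
    except OSError:
        return True  # refused or timed out: treated as "service is down"
```

The e2e test additionally appends `&& echo service-down-failed` so that an unexpected success leaves a marker in stdout; the empty stdout in the log confirms curl never succeeded.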
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
May 20 23:23:22.838: INFO: Creating new host exec pod
May 20 23:23:22.851: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:24.855: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:26.859: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:28.856: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:30.855: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:32.856: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:34.858: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:36.855: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:38.855: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:40.854: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:42.855: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:23:42.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.249:80 && echo service-down-failed'
May 20 23:23:45.346: INFO: rc: 28
May 20 23:23:45.346: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.12.249:80 && echo service-down-failed" in pod services-8601/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.249:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.12.249:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8601
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
May 20 23:23:45.361: INFO: Creating new host exec pod
May 20 23:23:45.374: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:47.377: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:49.377: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:51.378: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:53.376: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:23:53.376: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:23:59.393: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done" in pod services-8601/verify-service-up-host-exec-pod
May 20 23:23:59.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done'
May 20 23:24:00.089: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.249:80\n+ echo\n (the wget/echo trace pair repeats identically for all 150 iterations; repeats elided)\n"
May 20 23:24:00.089: INFO: stdout: "service-headless-toggled-9dgr5\nservice-headless-toggled-7l76n\nservice-headless-toggled-fl948\n... (150 responses in total, distributed across these three backends; repeated lines elided)\n"
May 20 23:24:00.089: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done" in pod services-8601/verify-service-up-exec-pod-j2n8d
May 20 23:24:00.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-up-exec-pod-j2n8d -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.249:80 2>&1 || true; echo; done'
May 20 23:24:00.888: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.249:80\n+ echo\n (the wget/echo trace pair repeats identically for all 150 iterations; repeats elided)\n"
May 20 23:24:00.889: INFO: stdout: "service-headless-toggled-7l76n\nservice-headless-toggled-fl948\nservice-headless-toggled-9dgr5\n... (150 responses in total, distributed across these three backends; repeated lines elided)\n"
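The "verifying service has 3 reachable backends" step passes once every expected backend pod name appears among the 150 wget responses. A sketch of that tally in Python (function name and the `min_hits` knob are illustrative assumptions, not the framework's code):

```python
from collections import Counter

def backends_covered(responses, expected, min_hits=1):
    """Return True if every name in `expected` appears at least `min_hits`
    times among the stripped, non-empty response lines; an empty line
    corresponds to a wget that produced no body (a failed request)."""
    hits = Counter(line.strip() for line in responses if line.strip())
    return all(hits[name] >= min_hits for name in expected)
```

Running the check from both a host-network pod and a regular pod, as the log does, verifies the ClusterIP is reachable from both the node and the pod network.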
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8601
STEP: Deleting pod verify-service-up-exec-pod-j2n8d in namespace services-8601
STEP: verifying service-headless is still not up
May 20 23:24:00.903: INFO: Creating new host exec pod
May 20 23:24:00.913: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:02.918: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:04.918: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:06.917: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:24:06.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed'
May 20 23:24:09.163: INFO: rc: 28
May 20 23:24:09.163: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed" in pod services-8601/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8601 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.42:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.25.42:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8601
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:09.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8601" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:93.944 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":45,"failed":0}
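The "verifying service-headless is still not up" check above can be reproduced by hand. A minimal sketch, assuming `kubectl` on PATH and a reachable cluster; the namespace, pod, IP, and port arguments are placeholders taken from this log. Because the service carries the `service.kubernetes.io/headless` label, kube-proxy should not program rules for it, so curl timing out (exit code 28) is the passing outcome:

```shell
# Sketch only: namespace, pod, IP and port are hypothetical arguments.
# The service carries the service.kubernetes.io/headless label, so it
# should be unreachable; curl is expected to hit --connect-timeout.
verify_service_down() {
  ns=$1; pod=$2; ip=$3; port=$4
  kubectl --namespace "$ns" exec "$pod" -- \
    /bin/sh -x -c "curl -g -s --connect-timeout 2 http://$ip:$port && echo service-down-failed"
  # kubectl exec propagates the remote exit status; 28 (curl timeout) passes.
  [ $? -eq 28 ]
}
# e.g. verify_service_down services-8601 verify-service-down-host-exec-pod 10.233.25.42 80
```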

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:55.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-5506
STEP: creating replication controller externalip-test in namespace services-5506
I0520 23:23:55.363662      37 runners.go:190] Created replication controller with name: externalip-test, namespace: services-5506, replica count: 2
I0520 23:23:58.415880      37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:01.417515      37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:04.418072      37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:07.418867      37 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 20 23:24:07.418: INFO: Creating new exec pod
May 20 23:24:12.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5506 exec execpodpmjvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May 20 23:24:12.720: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May 20 23:24:12.721: INFO: stdout: "externalip-test-6wk8g"
May 20 23:24:12.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5506 exec execpodpmjvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.62.220 80'
May 20 23:24:12.965: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.62.220 80\nConnection to 10.233.62.220 80 port [tcp/http] succeeded!\n"
May 20 23:24:12.965: INFO: stdout: "externalip-test-mlqz2"
May 20 23:24:12.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5506 exec execpodpmjvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
May 20 23:24:13.221: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
May 20 23:24:13.221: INFO: stdout: "externalip-test-6wk8g"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:13.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5506" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:17.903 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":351,"failed":0}
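The three probes above (service DNS name, clusterIP, and the unassigned external IP) all use the same pattern, which can be sketched as a small helper. Assumes `kubectl` and a cluster; the namespace, client pod, target, and port names are placeholders from this log:

```shell
# Sketch only: namespace, client pod, target and port are hypothetical.
# Sends "hostName" over TCP with nc; the echoed pod name on stdout
# identifies which backend answered, as in the externalip-test output.
check_endpoint() {
  ns=$1; pod=$2; target=$3; port=$4
  kubectl --namespace "$ns" exec "$pod" -- \
    /bin/sh -c "echo hostName | nc -v -t -w 2 $target $port"
}
# e.g. check_endpoint services-5506 execpodpmjvf 203.0.113.250 80
```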

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:13.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-2595
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-2595
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:13.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2595" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":456,"failed":0}
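The collision test drives the NodePort allocator with two Services pinned to the same port. A sketch of such a manifest (the names and port number are hypothetical); applying a second Service with the same `nodePort` should be rejected by the API server with a "provided port is already allocated" error, and deleting the first Service frees the port for reuse:

```shell
# Emit a NodePort Service manifest pinned to an explicit nodePort.
# Sketch only: the name and port arguments are hypothetical.
nodeport_svc() {
  name=$1; port=$2
  cat <<EOF
apiVersion: v1
kind: Service
metadata:
  name: $name
spec:
  type: NodePort
  selector:
    app: $name
  ports:
  - port: 80
    nodePort: $port
EOF
}
# e.g. nodeport_svc nodeport-collision-1 30123 | kubectl apply -f -
```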

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:55.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
May 20 23:23:55.125: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:57.129: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:59.130: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:01.131: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:03.130: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:05.130: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May 20 23:24:05.145: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:07.150: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:09.148: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May 20 23:24:11.167: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:13.171: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:15.171: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
May 20 23:24:15.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-3578 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
May 20 23:24:15.452: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
May 20 23:24:15.452: INFO: stdout: "tcp      6 3597 CLOSE_WAIT src=10.244.4.214 dst=10.10.190.208 sport=35300 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=38088 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
May 20 23:24:15.452: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3597 CLOSE_WAIT src=10.244.4.214 dst=10.10.190.208 sport=35300 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=38088 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:15.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-3578" for this suite.


• [SLOW TEST:20.378 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":3,"skipped":690,"failed":0}
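The conntrack assertion above boils down to one lookup on the server node. kube-proxy raises the kernel's CLOSE_WAIT timeout via its `--conntrack-tcp-timeout-close-wait` flag (default 1h0m0s), and the test checks that the entry's remaining-timeout field (3597 here, just under 3600 seconds) reflects that setting. A sketch of the lookup, assuming conntrack-tools and root; the destination IP and port are placeholders from this log:

```shell
# Sketch only: requires conntrack-tools and root privileges; the
# destination IP and port arguments are placeholders. Prints the first
# matching CLOSE_WAIT entry; the third field is the remaining timeout
# in seconds.
close_wait_entry() {
  dst=$1; dport=$2
  conntrack -L -f ipv4 -d "$dst" | grep -m 1 "CLOSE_WAIT.*dport=$dport"
}
# e.g. close_wait_entry 10.10.190.208 11302
```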

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:15.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
May 20 23:24:15.636: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
May 20 23:24:15.741: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:15.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7913" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.146 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:16.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May 20 23:24:16.209: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:16.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-259" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:00.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-2356
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:24:00.244: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:24:00.279: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:02.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:04.285: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:06.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:08.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:10.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:12.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:14.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:16.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:18.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:20.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:22.284: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:24:22.290: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:24:26.328: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:24:26.328: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:24:26.334: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:26.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2356" for this suite.


S [SKIPPING] [26.212 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:56.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-934
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:23:56.272: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:23:56.302: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:58.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:00.306: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:02.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:04.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:06.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:08.307: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:10.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:12.307: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:14.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:16.306: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:24:16.311: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 20 23:24:18.314: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:24:26.335: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:24:26.335: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:24:26.341: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:26.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-934" for this suite.


S [SKIPPING] [30.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:26.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
May 20 23:24:26.549: INFO: (0) /api/v1/nodes/node2/proxy/logs/: 
anaconda/
audit/
boot.log
[... the same three-entry listing repeated many more times; duplicates elided. The remainder of the proxy-logs test output is truncated here, and the log resumes partway through the setup of the next (Conntrack) test ...]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-4618
STEP: creating a client pod for probing the service svc-udp
May 20 23:23:51.888: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:53.892: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:55.891: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:57.893: INFO: The status of Pod pod-client is Running (Ready = true)
May 20 23:23:57.905: INFO: Pod client logs: Fri May 20 23:23:54 UTC 2022
Fri May 20 23:23:54 UTC 2022 Try: 1

Fri May 20 23:23:54 UTC 2022 Try: 2

Fri May 20 23:23:54 UTC 2022 Try: 3

Fri May 20 23:23:54 UTC 2022 Try: 4

Fri May 20 23:23:54 UTC 2022 Try: 5

Fri May 20 23:23:54 UTC 2022 Try: 6

Fri May 20 23:23:54 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
May 20 23:23:57.918: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:59.922: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:01.923: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:03.923: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-4618 to expose endpoints map[pod-server-1:[80]]
May 20 23:24:03.935: INFO: successfully validated that service svc-udp in namespace conntrack-4618 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
May 20 23:24:13.958: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:15.963: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:17.964: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:19.962: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:21.964: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:23.964: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:25.963: INFO: The status of Pod pod-server-2 is Running (Ready = true)
May 20 23:24:25.965: INFO: Cleaning up pod-server-1 pod
May 20 23:24:25.973: INFO: Waiting for pod pod-server-1 to disappear
May 20 23:24:25.975: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-4618 to expose endpoints map[pod-server-2:[80]]
May 20 23:24:25.984: INFO: successfully validated that service svc-udp in namespace conntrack-4618 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:36.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-4618" for this suite.


• [SLOW TEST:44.485 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":4,"skipped":414,"failed":0}
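The client side of this Conntrack test is a dated UDP probe loop against the service, matching the "Try: N" lines in the pod-client log above; the test then verifies that replacing the backend pod does not strand traffic on a stale UDP conntrack entry for the clusterIP. A minimal sketch of the probe loop (the service IP and port arguments are placeholders; assumes an `nc` with UDP support):

```shell
# Sketch only: $1/$2 (service IP and port) are placeholder arguments.
# Emits one datagram per second and logs each attempt, like the
# pod-client output above. Runs until interrupted.
udp_probe_loop() {
  i=0
  while :; do
    i=$((i + 1))
    echo "$(date -u) Try: $i"
    echo hostname | nc -u -w 1 "$1" "$2" || true
    sleep 1
  done
}
# e.g. udp_probe_loop 10.233.25.42 80
```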
May 20 23:24:36.322: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:22:35.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-6735
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:22:35.937: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:22:35.968: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:37.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:39.971: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:41.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:43.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:45.971: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:22:47.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:23:07.976: INFO: The status of Pod netserver-0 is Running (Ready = false) (same result polled every 2s since 23:22:49.972)
May 20 23:23:09.971: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:23:09.977: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:23:18.018: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:23:18.018: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May 20 23:23:18.042: INFO: Service node-port-service in namespace nettest-6735 found.
May 20 23:23:18.057: INFO: Service session-affinity-service in namespace nettest-6735 found.
STEP: Waiting for NodePort service to expose endpoint
May 20 23:23:19.061: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May 20 23:23:20.064: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) 10.10.190.207 (node) --> 10.233.40.232:80 (config.clusterIP)
May 20 23:23:20.067: INFO: Going to poll 10.233.40.232 on port 80 at least 0 times, with a maximum of 34 tries before failing
May 20 23:23:20.070: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.40.232:80/hostName | grep -v '^\s*$'] Namespace:nettest-6735 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:23:20.070: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:23:20.478: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
May 20 23:23:22.483: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.40.232:80/hostName | grep -v '^\s*$'] Namespace:nettest-6735 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:23:22.483: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:23:22.584: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
STEP: dialing(http) 10.10.190.207 (node) --> 10.10.190.207:32102 (nodeIP)
May 20 23:23:22.584: INFO: Going to poll 10.10.190.207 on port 32102 at least 0 times, with a maximum of 34 tries before failing
May 20 23:23:22.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName | grep -v '^\s*$'] Namespace:nettest-6735 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:23:22.587: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:23:22.676: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
May 20 23:23:22.676: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
May 20 23:23:24.681 - 23:24:34.327: INFO: (the ExecWithOptions / "Failed to execute ... command terminated with exit code 1" / "Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])" cycle above repeats every ~2s, 32 more times, with no endpoint ever answering on 10.10.190.207:32102)
May 20 23:24:36.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName | grep -v '^\s*$'] Namespace:nettest-6735 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:24:36.330: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:24:36.516: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
May 20 23:24:36.516: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
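The retry pattern visible in this log — dial /hostName repeatedly (MaxTries=34 here, derived from the endpoint count) and keep going until every expected netserver has answered — can be sketched as follows. This is a simplified illustration, not the actual e2e framework code; the fake dialer stands in for the `curl` exec'd inside host-test-container-pod above, and all names are taken from this run:

```python
# Sketch of the e2e "dial and wait for endpoints" loop (illustration only).
# The real framework shells into a host-network pod and curls the NodePort;
# an empty reply models a curl attempt that exits with code 1.

def wait_for_endpoints(dial, expected, max_tries=34):
    """Poll `dial` until every hostname in `expected` has answered,
    or until max_tries attempts are exhausted. Returns the set seen."""
    seen = set()
    for _ in range(max_tries):
        hostname = dial()          # e.g. curl http://<nodeIP>:<nodePort>/hostName
        if hostname:               # empty string == failed attempt
            seen.add(hostname)
        if expected <= seen:       # all expected endpoints have replied
            return seen
    return seen                    # timed out: caller reports expected vs actual

# Fake dialer: first two attempts fail, then the two netserver pods answer.
replies = iter(["", "", "netserver-1", "netserver-0"])
result = wait_for_endpoints(lambda: next(replies, ""),
                            {"netserver-0", "netserver-1"})
print(sorted(result))  # ['netserver-0', 'netserver-1']
```

In the failing NodePort dial above, every attempt returns empty, so `seen` stays empty and the test eventually reports `actual=[]` after exhausting its tries.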
May 20 23:24:38.517: INFO: 
Output of kubectl describe pod nettest-6735/netserver-0:

May 20 23:24:38.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-6735 describe pod netserver-0 --namespace=nettest-6735'
May 20 23:24:38.725: INFO: stderr: ""
May 20 23:24:38.725: INFO: stdout: (escaped kubectl describe output for netserver-0; printed in decoded form below)
May 20 23:24:38.725: INFO: Name:         netserver-0
Namespace:    nettest-6735
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 20 May 2022 23:22:35 +0000
Labels:       selector-6cacff6a-9db1-4515-8f9c-9c320e208311=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.179"
                    ],
                    "mac": "fa:e6:78:d9:cd:4a",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.179"
                    ],
                    "mac": "fa:e6:78:d9:cd:4a",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.179
IPs:
  IP:  10.244.4.179
Containers:
  webserver:
    Container ID:  docker://1cfba9549c96b9cecc679b748a7fb438538867414c39ba67b6ae40352b2ba4e5
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 20 May 2022 23:22:46 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzb9r (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-mzb9r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2m2s  default-scheduler  Successfully assigned nettest-6735/netserver-0 to node1
  Normal  Pulling    115s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     113s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.591861463s
  Normal  Created    113s  kubelet            Created container webserver
  Normal  Started    112s  kubelet            Started container webserver

May 20 23:24:38.725: INFO: 
Output of kubectl describe pod nettest-6735/netserver-1:

May 20 23:24:38.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-6735 describe pod netserver-1 --namespace=nettest-6735'
May 20 23:24:38.908: INFO: stderr: ""
May 20 23:24:38.908: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-6735\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Fri, 20 May 2022 23:22:35 +0000\nLabels:       selector-6cacff6a-9db1-4515-8f9c-9c320e208311=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.5\"\n                    ],\n                    \"mac\": \"9e:48:c3:7b:44:ef\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.5\"\n                    ],\n                    \"mac\": \"9e:48:c3:7b:44:ef\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.5\nIPs:\n  IP:  10.244.3.5\nContainers:\n  webserver:\n    Container ID:  docker://ef875d73e51e807a8775baace7612016c8097235e36a78645f3bb22733bb7597\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 20 May 2022 23:22:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gnbd8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-gnbd8:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2m2s  default-scheduler  Successfully assigned nettest-6735/netserver-1 to node2\n  Normal  Pulling    114s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     113s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 1.274279987s\n  Normal  Created    113s  kubelet            Created container webserver\n  Normal  Started    112s  kubelet            Started container webserver\n"
May 20 23:24:38.908: INFO: Name:         netserver-1
Namespace:    nettest-6735
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 20 May 2022 23:22:35 +0000
Labels:       selector-6cacff6a-9db1-4515-8f9c-9c320e208311=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.5"
                    ],
                    "mac": "9e:48:c3:7b:44:ef",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.5"
                    ],
                    "mac": "9e:48:c3:7b:44:ef",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.5
IPs:
  IP:  10.244.3.5
Containers:
  webserver:
    Container ID:  docker://ef875d73e51e807a8775baace7612016c8097235e36a78645f3bb22733bb7597
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 20 May 2022 23:22:46 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gnbd8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-gnbd8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2m2s  default-scheduler  Successfully assigned nettest-6735/netserver-1 to node2
  Normal  Pulling    114s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     113s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.274279987s
  Normal  Created    113s  kubelet            Created container webserver
  Normal  Started    112s  kubelet            Started container webserver

May 20 23:24:38.909: FAIL: failed dialing endpoint, failed to find expected endpoints, 
tries 34
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001f81200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001f81200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001f81200, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-6735".
STEP: Found 20 events.
May 20 23:24:38.914: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for host-test-container-pod: { } Scheduled: Successfully assigned nettest-6735/host-test-container-pod to node1
May 20 23:24:38.914: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned nettest-6735/netserver-0 to node1
May 20 23:24:38.914: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned nettest-6735/netserver-1 to node2
May 20 23:24:38.914: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned nettest-6735/test-container-pod to node2
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:43 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:44 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:45 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.591861463s
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:45 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:45 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.274279987s
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:45 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:46 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
May 20 23:24:38.914: INFO: At 2022-05-20 23:22:46 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 386.851225ms
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:13 +0000 UTC - event for host-test-container-pod: {kubelet node1} Created: Created container agnhost-container
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:13 +0000 UTC - event for host-test-container-pod: {kubelet node1} Started: Started container agnhost-container
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:14 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:14 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 320.833611ms
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:15 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
May 20 23:24:38.914: INFO: At 2022-05-20 23:23:15 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
May 20 23:24:38.917: INFO: POD                      NODE   PHASE    GRACE  CONDITIONS
May 20 23:24:38.917: INFO: host-test-container-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:09 +0000 UTC  }]
May 20 23:24:38.917: INFO: netserver-0              node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:22:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:22:35 +0000 UTC  }]
May 20 23:24:38.917: INFO: netserver-1              node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:22:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:22:35 +0000 UTC  }]
May 20 23:24:38.918: INFO: test-container-pod       node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:09 +0000 UTC  }]
May 20 23:24:38.918: INFO: 
May 20 23:24:38.922: INFO: 
Logging node info for node master1
May 20 23:24:38.925: INFO: Node Info: &Node{ObjectMeta:{master1    b016dcf2-74b7-4456-916a-8ca363b9ccc3 75623 0 2022-05-20 20:01:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:30 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:24:38.926: INFO: 
Logging kubelet events for node master1
May 20 23:24:38.928: INFO: 
Logging pods the kubelet thinks are on node master1
May 20 23:24:38.943: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:24:38.943: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container docker-registry ready: true, restart count 0
May 20 23:24:38.943: INFO: 	Container nginx ready: true, restart count 0
May 20 23:24:38.943: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:38.943: INFO: 	Container prometheus-operator ready: true, restart count 0
May 20 23:24:38.943: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-scheduler ready: true, restart count 1
May 20 23:24:38.943: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:24:38.943: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-controller-manager ready: true, restart count 3
May 20 23:24:38.943: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:24:38.943: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:24:38.943: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:24:38.943: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container nfd-controller ready: true, restart count 0
May 20 23:24:38.943: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:38.943: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:38.943: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:24:39.053: INFO: 
Latency metrics for node master1
May 20 23:24:39.053: INFO: 
Logging node info for node master2
May 20 23:24:39.055: INFO: Node Info: &Node{ObjectMeta:{master2    ddc04b08-e43a-4e18-a612-aa3bf7f8411e 75576 0 2022-05-20 20:01:56 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:29 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:24:39.056: INFO: 
Logging kubelet events for node master2
May 20 23:24:39.058: INFO: 
Logging pods the kubelet thinks are on node master2
May 20 23:24:39.073: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:24:39.074: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-scheduler ready: true, restart count 3
May 20 23:24:39.074: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 20 23:24:39.074: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-proxy ready: true, restart count 1
May 20 23:24:39.074: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:24:39.074: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:24:39.074: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container coredns ready: true, restart count 1
May 20 23:24:39.074: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container autoscaler ready: true, restart count 1
May 20 23:24:39.074: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:39.074: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:24:39.074: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.074: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:24:39.160: INFO: 
Latency metrics for node master2
May 20 23:24:39.160: INFO: 
Logging node info for node master3
May 20 23:24:39.163: INFO: Node Info: &Node{ObjectMeta:{master3    f42c1bd6-d828-4857-9180-56c73dcc370f 75784 0 2022-05-20 20:02:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:37 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:37 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:37 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:37 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:24:39.163: INFO: 
Logging kubelet events for node master3
May 20 23:24:39.165: INFO: 
Logging pods the kubelet thinks are on node master3
May 20 23:24:39.180: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 20 23:24:39.180: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-scheduler ready: true, restart count 2
May 20 23:24:39.180: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:24:39.180: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Init container install-cni ready: true, restart count 0
May 20 23:24:39.180: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:24:39.180: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:24:39.180: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:24:39.180: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container coredns ready: true, restart count 1
May 20 23:24:39.180: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.180: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:39.180: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:24:39.275: INFO: 
Latency metrics for node master3
May 20 23:24:39.275: INFO: 
Logging node info for node node1
May 20 23:24:39.278: INFO: Node Info: &Node{ObjectMeta:{node1    65c381dd-b6f5-4e67-a327-7a45366d15af 75732 0 2022-05-20 20:03:10 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-20 22:57:29 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 
0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:35 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:35 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:35 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:35 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:24:39.280: INFO: 
Logging kubelet events for node node1
May 20 23:24:39.283: INFO: 
Logging pods the kubelet thinks are on node node1
May 20 23:24:39.318: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:24:39.318: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 20 23:24:39.318: INFO: service-proxy-disabled-pmwdl started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:24:39.318: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:24:39.318: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:24:39.318: INFO: pod-client started at 2022-05-20 23:23:51 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container pod-client ready: true, restart count 0
May 20 23:24:39.318: INFO: startup-script started at 2022-05-20 23:23:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container startup-script ready: true, restart count 0
May 20 23:24:39.318: INFO: netserver-0 started at 2022-05-20 23:23:56 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container webserver ready: false, restart count 0
May 20 23:24:39.318: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:24:39.318: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container config-reloader ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container grafana ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container prometheus ready: true, restart count 1
May 20 23:24:39.318: INFO: e2e-net-exec started at 2022-05-20 23:23:55 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container e2e-net-exec ready: true, restart count 0
May 20 23:24:39.318: INFO: up-down-2-whrhh started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:24:39.318: INFO: iperf2-clients-bqjc8 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container iperf2-client ready: true, restart count 0
May 20 23:24:39.318: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container collectd ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:24:39.318: INFO: netserver-0 started at 2022-05-20 23:24:16 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container webserver ready: true, restart count 0
May 20 23:24:39.318: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:24:39.318: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container discover ready: false, restart count 0
May 20 23:24:39.318: INFO: 	Container init ready: false, restart count 0
May 20 23:24:39.318: INFO: 	Container install ready: false, restart count 0
May 20 23:24:39.318: INFO: service-proxy-disabled-9hsfn started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:24:39.318: INFO: netserver-0 started at 2022-05-20 23:22:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container webserver ready: true, restart count 0
May 20 23:24:39.318: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:24:39.318: INFO: 	Container kube-flannel ready: true, restart count 3
May 20 23:24:39.318: INFO: host-test-container-pod started at 2022-05-20 23:23:10 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:24:39.318: INFO: up-down-3-txdwc started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:24:39.318: INFO: pod-client started at 2022-05-20 23:23:46 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container pod-client ready: true, restart count 0
May 20 23:24:39.318: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:24:39.318: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:24:39.318: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:24:39.318: INFO: up-down-2-6pjb4 started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.318: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:24:39.318: INFO: service-proxy-disabled-bd8ld started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.319: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:24:39.319: INFO: up-down-3-s8wr5 started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.319: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:24:39.617: INFO: 
Latency metrics for node node1
May 20 23:24:39.617: INFO: 
Logging node info for node node2
May 20 23:24:39.620: INFO: Node Info: &Node{ObjectMeta:{node2    a0e0a426-876d-4419-96e4-c6977ef3393c 75711 0 2022-05-20 20:03:09 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-05-20 22:31:06 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 
+0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:33 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:33 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:33 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:33 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 
localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:24:39.621: INFO: 
Logging kubelet events for node node2
May 20 23:24:39.623: INFO: 
Logging pods the kubelet thinks are on node node2
May 20 23:24:39.640: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container cmk-webhook ready: true, restart count 0
May 20 23:24:39.640: INFO: up-down-3-qrm62 started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:24:39.640: INFO: service-proxy-toggled-zg2kg started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container service-proxy-toggled ready: false, restart count 0
May 20 23:24:39.640: INFO: netserver-1 started at 2022-05-20 23:23:56 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container webserver ready: false, restart count 0
May 20 23:24:39.640: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:24:39.640: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.640: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May 20 23:24:39.641: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container tas-extender ready: true, restart count 0
May 20 23:24:39.641: INFO: boom-server started at 2022-05-20 23:23:27 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container boom-server ready: true, restart count 0
May 20 23:24:39.641: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container discover ready: false, restart count 0
May 20 23:24:39.641: INFO: 	Container init ready: false, restart count 0
May 20 23:24:39.641: INFO: 	Container install ready: false, restart count 0
May 20 23:24:39.641: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container collectd ready: true, restart count 0
May 20 23:24:39.641: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:24:39.641: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:24:39.641: INFO: up-down-2-b7fkw started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:24:39.641: INFO: nodeport-update-service-skqd5 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:24:39.641: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:24:39.641: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:24:39.641: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Init container install-cni ready: true, restart count 1
May 20 23:24:39.641: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:24:39.641: INFO: verify-service-up-host-exec-pod started at 2022-05-20 23:24:39 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container agnhost-container ready: false, restart count 0
May 20 23:24:39.641: INFO: iperf2-server-deployment-59979d877-bmv7x started at 2022-05-20 23:24:09 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container iperf2-server ready: true, restart count 0
May 20 23:24:39.641: INFO: netserver-1 started at 2022-05-20 23:22:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container webserver ready: true, restart count 0
May 20 23:24:39.641: INFO: pod-server-1 started at 2022-05-20 23:23:54 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:24:39.641: INFO: iperf2-clients-ql5nt started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container iperf2-client ready: true, restart count 0
May 20 23:24:39.641: INFO: netserver-1 started at 2022-05-20 23:24:16 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container webserver ready: true, restart count 0
May 20 23:24:39.641: INFO: service-proxy-toggled-x4qjm started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container service-proxy-toggled ready: false, restart count 0
May 20 23:24:39.641: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:24:39.641: INFO: service-proxy-toggled-bch5g started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container service-proxy-toggled ready: false, restart count 0
May 20 23:24:39.641: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:24:39.641: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:24:39.641: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:24:39.641: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:24:39.641: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:24:39.641: INFO: pod-server-2 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:24:39.641: INFO: test-container-pod started at 2022-05-20 23:23:10 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container webserver ready: true, restart count 0
May 20 23:24:39.641: INFO: test-container-pod started at 2022-05-20 23:24:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container webserver ready: false, restart count 0
May 20 23:24:39.641: INFO: nodeport-update-service-lrn9m started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:24:39.641: INFO: execpodpnrdf started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:24:39.641: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:24:40.428: INFO: 
Latency metrics for node node2
May 20 23:24:40.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6735" for this suite.


• Failure [124.630 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    May 20 23:24:38.909: failed dialing endpoint, failed to find expected endpoints, 
    tries 34
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:32102/hostName
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
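The failure above reports 34 dial attempts that never returned the expected endpoint set ("retrieved map[]" vs "expected map[netserver-0:{} netserver-1:{}]"). A minimal sketch of that poll loop, assuming each try issues one curl-style request to `http://<nodeIP>:<nodePort>/hostName` and the check passes once every expected backend has answered at least once (the names `dial` and `poll_endpoints` are hypothetical, not the e2e framework's API):

```python
def poll_endpoints(dial, expected, max_tries):
    """Repeatedly call dial() -- one request returning a backend hostname,
    or None on a connect/read timeout -- until every name in `expected`
    has responded, or max_tries is exhausted."""
    seen = set()
    for _ in range(max_tries):
        host = dial()
        if host is not None:
            seen.add(host)
        if expected <= seen:
            return True, seen
    return False, seen

# A dial that never reaches any backend reproduces the failure shape
# seen in the log: zero hostnames retrieved after all 34 tries.
ok, seen = poll_endpoints(lambda: None, {"netserver-0", "netserver-1"}, 34)
```

In the failing run the NodePort (32102 on 10.10.190.207) was unreachable for every try, so `seen` stayed empty and the test aborted after the try budget was spent.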
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:16.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-4948
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 20 23:24:16.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:24:16.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:18.458: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:20.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:22.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:24.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:26.457: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:28.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:30.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:32.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:34.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:36.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 20 23:24:38.458: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 20 23:24:38.464: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 20 23:24:42.489: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 20 23:24:42.489: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 20 23:24:42.496: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:42.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4948" for this suite.


S [SKIPPING] [26.206 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
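The SKIPPING result above comes from a node-count precondition that evaluated to -1, i.e. the framework could not determine the schedulable node count rather than finding a genuine single-node cluster. A hedged sketch of that guard (the function name `require_nodes` is hypothetical; the message format matches the log line "Requires at least 2 nodes (not -1)"):

```python
def require_nodes(node_count, minimum):
    """Return None when the precondition holds, otherwise the skip
    message the e2e log prints. A negative count signals that node
    discovery itself failed, which also triggers the skip."""
    if node_count >= minimum:
        return None
    return f"Requires at least {minimum} nodes (not {node_count})"

# The skipped spec's situation: discovery returned -1, minimum is 2.
msg = require_nodes(-1, 2)
```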
May 20 23:24:42.508: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:27.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
May 20 23:23:27.264: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:29.269: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:31.269: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:33.267: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:35.268: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
May 20 23:23:35.288: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:37.292: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:39.291: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:41.292: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:43.367: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
May 20 23:24:43.398: INFO: boom-server pod logs: 2022/05/20 23:23:34 external ip: 10.244.3.29
2022/05/20 23:23:34 listen on 0.0.0.0:9000
2022/05/20 23:23:34 probing 10.244.3.29
2022/05/20 23:23:40 tcp packet: &{SrcPort:44785 DestPort:9000 Seq:3048573839 Ack:0 Flags:40962 WindowSize:29200 Checksum:17394 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:40 tcp packet: &{SrcPort:44785 DestPort:9000 Seq:3048573840 Ack:767773739 Flags:32784 WindowSize:229 Checksum:14593 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:40 connection established
2022/05/20 23:23:40 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 174 241 45 193 197 139 181 181 139 144 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:40 checksumer: &{sum:562784 oddByte:33 length:39}
2022/05/20 23:23:40 ret:  562817
2022/05/20 23:23:40 ret:  38537
2022/05/20 23:23:40 ret:  38537
2022/05/20 23:23:40 boom packet injected
2022/05/20 23:23:40 tcp packet: &{SrcPort:44785 DestPort:9000 Seq:3048573840 Ack:767773739 Flags:32785 WindowSize:229 Checksum:14592 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:42 tcp packet: &{SrcPort:36552 DestPort:9000 Seq:1566745291 Ack:0 Flags:40962 WindowSize:29200 Checksum:40289 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:42 tcp packet: &{SrcPort:36552 DestPort:9000 Seq:1566745292 Ack:3246000254 Flags:32784 WindowSize:229 Checksum:14998 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:42 connection established
2022/05/20 23:23:42 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 142 200 193 120 129 222 93 98 162 204 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:42 checksumer: &{sum:548943 oddByte:33 length:39}
2022/05/20 23:23:42 ret:  548976
2022/05/20 23:23:42 ret:  24696
2022/05/20 23:23:42 ret:  24696
2022/05/20 23:23:42 boom packet injected
2022/05/20 23:23:42 tcp packet: &{SrcPort:36552 DestPort:9000 Seq:1566745292 Ack:3246000254 Flags:32785 WindowSize:229 Checksum:14997 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:44 tcp packet: &{SrcPort:35093 DestPort:9000 Seq:3188314702 Ack:0 Flags:40962 WindowSize:29200 Checksum:2841 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:44 tcp packet: &{SrcPort:35093 DestPort:9000 Seq:3188314703 Ack:1456056841 Flags:32784 WindowSize:229 Checksum:27041 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:44 connection established
2022/05/20 23:23:44 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 137 21 86 200 35 105 190 9 210 79 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:44 checksumer: &{sum:438802 oddByte:33 length:39}
2022/05/20 23:23:44 ret:  438835
2022/05/20 23:23:44 ret:  45625
2022/05/20 23:23:44 ret:  45625
2022/05/20 23:23:44 boom packet injected
2022/05/20 23:23:44 tcp packet: &{SrcPort:35093 DestPort:9000 Seq:3188314703 Ack:1456056841 Flags:32785 WindowSize:229 Checksum:27040 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:46 tcp packet: &{SrcPort:43663 DestPort:9000 Seq:1542329532 Ack:0 Flags:40962 WindowSize:29200 Checksum:380 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:46 tcp packet: &{SrcPort:43663 DestPort:9000 Seq:1542329533 Ack:1088139263 Flags:32784 WindowSize:229 Checksum:26668 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:46 connection established
2022/05/20 23:23:46 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 170 143 64 218 41 95 91 238 20 189 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:46 checksumer: &{sum:558594 oddByte:33 length:39}
2022/05/20 23:23:46 ret:  558627
2022/05/20 23:23:46 ret:  34347
2022/05/20 23:23:46 ret:  34347
2022/05/20 23:23:46 boom packet injected
2022/05/20 23:23:46 tcp packet: &{SrcPort:43663 DestPort:9000 Seq:1542329533 Ack:1088139263 Flags:32785 WindowSize:229 Checksum:26667 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:48 tcp packet: &{SrcPort:35392 DestPort:9000 Seq:1323787575 Ack:0 Flags:40962 WindowSize:29200 Checksum:54917 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:48 tcp packet: &{SrcPort:35392 DestPort:9000 Seq:1323787576 Ack:328267148 Flags:32784 WindowSize:229 Checksum:7459 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:48 connection established
2022/05/20 23:23:48 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 138 64 19 143 110 236 78 231 101 56 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:48 checksumer: &{sum:519486 oddByte:33 length:39}
2022/05/20 23:23:48 ret:  519519
2022/05/20 23:23:48 ret:  60774
2022/05/20 23:23:48 ret:  60774
2022/05/20 23:23:48 boom packet injected
2022/05/20 23:23:48 tcp packet: &{SrcPort:35392 DestPort:9000 Seq:1323787576 Ack:328267148 Flags:32785 WindowSize:229 Checksum:7458 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:50 tcp packet: &{SrcPort:44785 DestPort:9000 Seq:3048573841 Ack:767773740 Flags:32784 WindowSize:229 Checksum:60125 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:50 tcp packet: &{SrcPort:41059 DestPort:9000 Seq:4228651172 Ack:0 Flags:40962 WindowSize:29200 Checksum:22528 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:50 tcp packet: &{SrcPort:41059 DestPort:9000 Seq:4228651173 Ack:382083614 Flags:32784 WindowSize:229 Checksum:26374 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:50 connection established
2022/05/20 23:23:50 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 160 99 22 196 155 126 252 12 24 165 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:50 checksumer: &{sum:485861 oddByte:33 length:39}
2022/05/20 23:23:50 ret:  485894
2022/05/20 23:23:50 ret:  27149
2022/05/20 23:23:50 ret:  27149
2022/05/20 23:23:50 boom packet injected
2022/05/20 23:23:50 tcp packet: &{SrcPort:41059 DestPort:9000 Seq:4228651173 Ack:382083614 Flags:32785 WindowSize:229 Checksum:26373 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:52 tcp packet: &{SrcPort:36552 DestPort:9000 Seq:1566745293 Ack:3246000255 Flags:32784 WindowSize:229 Checksum:60530 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:52 tcp packet: &{SrcPort:33572 DestPort:9000 Seq:4053764853 Ack:0 Flags:40962 WindowSize:29200 Checksum:1418 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:52 tcp packet: &{SrcPort:33572 DestPort:9000 Seq:4053764854 Ack:2809813448 Flags:32784 WindowSize:229 Checksum:17504 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:52 connection established
2022/05/20 23:23:52 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 131 36 167 120 211 40 241 159 138 246 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:52 checksumer: &{sum:486904 oddByte:33 length:39}
2022/05/20 23:23:52 ret:  486937
2022/05/20 23:23:52 ret:  28192
2022/05/20 23:23:52 ret:  28192
2022/05/20 23:23:52 boom packet injected
2022/05/20 23:23:52 tcp packet: &{SrcPort:33572 DestPort:9000 Seq:4053764854 Ack:2809813448 Flags:32785 WindowSize:229 Checksum:17503 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:54 tcp packet: &{SrcPort:35093 DestPort:9000 Seq:3188314704 Ack:1456056842 Flags:32784 WindowSize:229 Checksum:7037 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:54 tcp packet: &{SrcPort:44803 DestPort:9000 Seq:2753313616 Ack:0 Flags:40962 WindowSize:29200 Checksum:28419 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:54 tcp packet: &{SrcPort:44803 DestPort:9000 Seq:2753313617 Ack:2811166379 Flags:32784 WindowSize:229 Checksum:272 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:54 connection established
2022/05/20 23:23:54 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 175 3 167 141 120 11 164 28 59 81 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:54 checksumer: &{sum:400429 oddByte:33 length:39}
2022/05/20 23:23:54 ret:  400462
2022/05/20 23:23:54 ret:  7252
2022/05/20 23:23:54 ret:  7252
2022/05/20 23:23:54 boom packet injected
2022/05/20 23:23:54 tcp packet: &{SrcPort:44803 DestPort:9000 Seq:2753313617 Ack:2811166379 Flags:32785 WindowSize:229 Checksum:271 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:56 tcp packet: &{SrcPort:43663 DestPort:9000 Seq:1542329534 Ack:1088139264 Flags:32784 WindowSize:229 Checksum:6665 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:56 tcp packet: &{SrcPort:42703 DestPort:9000 Seq:1634077176 Ack:0 Flags:40962 WindowSize:29200 Checksum:58228 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:56 tcp packet: &{SrcPort:42703 DestPort:9000 Seq:1634077177 Ack:959249979 Flags:32784 WindowSize:229 Checksum:56451 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:56 connection established
2022/05/20 23:23:56 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 166 207 57 43 119 155 97 102 9 249 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:56 checksumer: &{sum:526144 oddByte:33 length:39}
2022/05/20 23:23:56 ret:  526177
2022/05/20 23:23:56 ret:  1897
2022/05/20 23:23:56 ret:  1897
2022/05/20 23:23:56 boom packet injected
2022/05/20 23:23:56 tcp packet: &{SrcPort:42703 DestPort:9000 Seq:1634077177 Ack:959249979 Flags:32785 WindowSize:229 Checksum:56450 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:58 tcp packet: &{SrcPort:35392 DestPort:9000 Seq:1323787577 Ack:328267149 Flags:32784 WindowSize:229 Checksum:52991 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:58 tcp packet: &{SrcPort:46281 DestPort:9000 Seq:3311598156 Ack:0 Flags:40962 WindowSize:29200 Checksum:31027 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:23:58 tcp packet: &{SrcPort:46281 DestPort:9000 Seq:3311598157 Ack:2502809616 Flags:32784 WindowSize:229 Checksum:13429 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:23:58 connection established
2022/05/20 23:23:58 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 180 201 149 44 81 112 197 98 250 77 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:23:58 checksumer: &{sum:469209 oddByte:33 length:39}
2022/05/20 23:23:58 ret:  469242
2022/05/20 23:23:58 ret:  10497
2022/05/20 23:23:58 ret:  10497
2022/05/20 23:23:58 boom packet injected
2022/05/20 23:23:58 tcp packet: &{SrcPort:46281 DestPort:9000 Seq:3311598157 Ack:2502809616 Flags:32785 WindowSize:229 Checksum:13428 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:00 tcp packet: &{SrcPort:41059 DestPort:9000 Seq:4228651174 Ack:382083615 Flags:32784 WindowSize:229 Checksum:6371 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:00 tcp packet: &{SrcPort:46488 DestPort:9000 Seq:449510714 Ack:0 Flags:40962 WindowSize:29200 Checksum:6205 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:00 tcp packet: &{SrcPort:46488 DestPort:9000 Seq:449510715 Ack:2143630518 Flags:32784 WindowSize:229 Checksum:33905 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:00 connection established
2022/05/20 23:24:00 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 181 152 127 195 174 22 26 202 253 59 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:00 checksumer: &{sum:494201 oddByte:33 length:39}
2022/05/20 23:24:00 ret:  494234
2022/05/20 23:24:00 ret:  35489
2022/05/20 23:24:00 ret:  35489
2022/05/20 23:24:00 boom packet injected
2022/05/20 23:24:00 tcp packet: &{SrcPort:46488 DestPort:9000 Seq:449510715 Ack:2143630518 Flags:32785 WindowSize:229 Checksum:33904 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:02 tcp packet: &{SrcPort:33572 DestPort:9000 Seq:4053764855 Ack:2809813449 Flags:32784 WindowSize:229 Checksum:63036 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:02 tcp packet: &{SrcPort:39871 DestPort:9000 Seq:3805531947 Ack:0 Flags:40962 WindowSize:29200 Checksum:36939 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:02 tcp packet: &{SrcPort:39871 DestPort:9000 Seq:3805531948 Ack:1567191415 Flags:32784 WindowSize:229 Checksum:55882 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:02 connection established
2022/05/20 23:24:02 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 155 191 93 103 234 215 226 211 207 44 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:02 checksumer: &{sum:528659 oddByte:33 length:39}
2022/05/20 23:24:02 ret:  528692
2022/05/20 23:24:02 ret:  4412
2022/05/20 23:24:02 ret:  4412
2022/05/20 23:24:02 boom packet injected
2022/05/20 23:24:02 tcp packet: &{SrcPort:39871 DestPort:9000 Seq:3805531948 Ack:1567191415 Flags:32785 WindowSize:229 Checksum:55881 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:04 tcp packet: &{SrcPort:44803 DestPort:9000 Seq:2753313618 Ack:2811166380 Flags:32784 WindowSize:229 Checksum:45805 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:04 tcp packet: &{SrcPort:43262 DestPort:9000 Seq:3900875046 Ack:0 Flags:40962 WindowSize:29200 Checksum:41872 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:04 tcp packet: &{SrcPort:43262 DestPort:9000 Seq:3900875047 Ack:211586013 Flags:32784 WindowSize:229 Checksum:7205 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:04 connection established
2022/05/20 23:24:04 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 168 254 12 155 5 61 232 130 161 39 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:04 checksumer: &{sum:496322 oddByte:33 length:39}
2022/05/20 23:24:04 ret:  496355
2022/05/20 23:24:04 ret:  37610
2022/05/20 23:24:04 ret:  37610
2022/05/20 23:24:04 boom packet injected
2022/05/20 23:24:04 tcp packet: &{SrcPort:43262 DestPort:9000 Seq:3900875047 Ack:211586013 Flags:32785 WindowSize:229 Checksum:7204 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:06 tcp packet: &{SrcPort:42703 DestPort:9000 Seq:1634077178 Ack:959249980 Flags:32784 WindowSize:229 Checksum:36448 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:06 tcp packet: &{SrcPort:33226 DestPort:9000 Seq:2646343098 Ack:0 Flags:40962 WindowSize:29200 Checksum:45351 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:06 tcp packet: &{SrcPort:33226 DestPort:9000 Seq:2646343099 Ack:522947226 Flags:32784 WindowSize:229 Checksum:4256 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:06 connection established
2022/05/20 23:24:06 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 129 202 31 42 3 250 157 187 253 187 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:06 checksumer: &{sum:554941 oddByte:33 length:39}
2022/05/20 23:24:06 ret:  554974
2022/05/20 23:24:06 ret:  30694
2022/05/20 23:24:06 ret:  30694
2022/05/20 23:24:06 boom packet injected
2022/05/20 23:24:06 tcp packet: &{SrcPort:33226 DestPort:9000 Seq:2646343099 Ack:522947226 Flags:32785 WindowSize:229 Checksum:4255 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:08 tcp packet: &{SrcPort:46281 DestPort:9000 Seq:3311598158 Ack:2502809617 Flags:32784 WindowSize:229 Checksum:58962 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:08 tcp packet: &{SrcPort:40627 DestPort:9000 Seq:4291996227 Ack:0 Flags:40962 WindowSize:29200 Checksum:32204 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:08 tcp packet: &{SrcPort:40627 DestPort:9000 Seq:4291996228 Ack:1698167396 Flags:32784 WindowSize:229 Checksum:8092 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:08 connection established
2022/05/20 23:24:08 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 158 179 101 54 115 196 255 210 170 68 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:08 checksumer: &{sum:513951 oddByte:33 length:39}
2022/05/20 23:24:08 ret:  513984
2022/05/20 23:24:08 ret:  55239
2022/05/20 23:24:08 ret:  55239
2022/05/20 23:24:08 boom packet injected
2022/05/20 23:24:08 tcp packet: &{SrcPort:40627 DestPort:9000 Seq:4291996228 Ack:1698167396 Flags:32785 WindowSize:229 Checksum:8091 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:10 tcp packet: &{SrcPort:46488 DestPort:9000 Seq:449510716 Ack:2143630519 Flags:32784 WindowSize:229 Checksum:13902 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:10 tcp packet: &{SrcPort:43955 DestPort:9000 Seq:555065841 Ack:0 Flags:40962 WindowSize:29200 Checksum:20492 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:10 tcp packet: &{SrcPort:43955 DestPort:9000 Seq:555065842 Ack:1273530496 Flags:32784 WindowSize:229 Checksum:30013 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:10 connection established
2022/05/20 23:24:10 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 171 179 75 231 1 224 33 21 161 242 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:10 checksumer: &{sum:562233 oddByte:33 length:39}
2022/05/20 23:24:10 ret:  562266
2022/05/20 23:24:10 ret:  37986
2022/05/20 23:24:10 ret:  37986
2022/05/20 23:24:10 boom packet injected
2022/05/20 23:24:10 tcp packet: &{SrcPort:43955 DestPort:9000 Seq:555065842 Ack:1273530496 Flags:32785 WindowSize:229 Checksum:30012 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:12 tcp packet: &{SrcPort:39871 DestPort:9000 Seq:3805531949 Ack:1567191416 Flags:32784 WindowSize:229 Checksum:35879 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:12 tcp packet: &{SrcPort:43247 DestPort:9000 Seq:2378617761 Ack:0 Flags:40962 WindowSize:29200 Checksum:44189 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:12 tcp packet: &{SrcPort:43247 DestPort:9000 Seq:2378617762 Ack:1548229938 Flags:32784 WindowSize:229 Checksum:9452 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:12 connection established
2022/05/20 23:24:12 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 168 239 92 70 150 146 141 198 211 162 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:12 checksumer: &{sum:541562 oddByte:33 length:39}
2022/05/20 23:24:12 ret:  541595
2022/05/20 23:24:12 ret:  17315
2022/05/20 23:24:12 ret:  17315
2022/05/20 23:24:12 boom packet injected
2022/05/20 23:24:12 tcp packet: &{SrcPort:43247 DestPort:9000 Seq:2378617762 Ack:1548229938 Flags:32785 WindowSize:229 Checksum:9451 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:14 tcp packet: &{SrcPort:38347 DestPort:9000 Seq:2721490226 Ack:0 Flags:40962 WindowSize:29200 Checksum:53744 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:14 tcp packet: &{SrcPort:38347 DestPort:9000 Seq:2721490227 Ack:1177844798 Flags:32784 WindowSize:229 Checksum:63819 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:14 connection established
2022/05/20 23:24:14 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 149 203 70 50 245 158 162 54 165 51 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:14 checksumer: &{sum:465047 oddByte:33 length:39}
2022/05/20 23:24:14 ret:  465080
2022/05/20 23:24:14 ret:  6335
2022/05/20 23:24:14 ret:  6335
2022/05/20 23:24:14 boom packet injected
2022/05/20 23:24:14 tcp packet: &{SrcPort:43262 DestPort:9000 Seq:3900875048 Ack:211586014 Flags:32784 WindowSize:229 Checksum:52690 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:14 tcp packet: &{SrcPort:38347 DestPort:9000 Seq:2721490227 Ack:1177844798 Flags:32785 WindowSize:229 Checksum:63818 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:16 tcp packet: &{SrcPort:33226 DestPort:9000 Seq:2646343100 Ack:522947227 Flags:32784 WindowSize:229 Checksum:49786 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:16 tcp packet: &{SrcPort:40992 DestPort:9000 Seq:133845873 Ack:0 Flags:40962 WindowSize:29200 Checksum:43976 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:16 tcp packet: &{SrcPort:40992 DestPort:9000 Seq:133845874 Ack:1040907866 Flags:32784 WindowSize:229 Checksum:20875 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:16 connection established
2022/05/20 23:24:16 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 160 32 62 9 119 186 7 250 83 114 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:16 checksumer: &{sum:483887 oddByte:33 length:39}
2022/05/20 23:24:16 ret:  483920
2022/05/20 23:24:16 ret:  25175
2022/05/20 23:24:16 ret:  25175
2022/05/20 23:24:16 boom packet injected
2022/05/20 23:24:16 tcp packet: &{SrcPort:40992 DestPort:9000 Seq:133845874 Ack:1040907866 Flags:32785 WindowSize:229 Checksum:20874 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:18 tcp packet: &{SrcPort:40627 DestPort:9000 Seq:4291996229 Ack:1698167397 Flags:32784 WindowSize:229 Checksum:53624 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:18 tcp packet: &{SrcPort:38782 DestPort:9000 Seq:1978526395 Ack:0 Flags:40962 WindowSize:29200 Checksum:42843 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:18 tcp packet: &{SrcPort:38782 DestPort:9000 Seq:1978526396 Ack:1796588517 Flags:32784 WindowSize:229 Checksum:21176 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:18 connection established
2022/05/20 23:24:18 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 151 126 107 20 61 69 117 237 234 188 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:18 checksumer: &{sum:496670 oddByte:33 length:39}
2022/05/20 23:24:18 ret:  496703
2022/05/20 23:24:18 ret:  37958
2022/05/20 23:24:18 ret:  37958
2022/05/20 23:24:18 boom packet injected
2022/05/20 23:24:18 tcp packet: &{SrcPort:38782 DestPort:9000 Seq:1978526396 Ack:1796588517 Flags:32785 WindowSize:229 Checksum:21175 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:20 tcp packet: &{SrcPort:43955 DestPort:9000 Seq:555065843 Ack:1273530497 Flags:32784 WindowSize:229 Checksum:10011 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:20 tcp packet: &{SrcPort:34792 DestPort:9000 Seq:3164318506 Ack:0 Flags:40962 WindowSize:29200 Checksum:43005 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:20 tcp packet: &{SrcPort:34792 DestPort:9000 Seq:3164318507 Ack:328968576 Flags:32784 WindowSize:229 Checksum:48477 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:20 connection established
2022/05/20 23:24:20 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 135 232 19 154 34 224 188 155 171 43 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:20 checksumer: &{sum:539555 oddByte:33 length:39}
2022/05/20 23:24:20 ret:  539588
2022/05/20 23:24:20 ret:  15308
2022/05/20 23:24:20 ret:  15308
2022/05/20 23:24:20 boom packet injected
2022/05/20 23:24:20 tcp packet: &{SrcPort:34792 DestPort:9000 Seq:3164318507 Ack:328968576 Flags:32785 WindowSize:229 Checksum:48476 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:22 tcp packet: &{SrcPort:43247 DestPort:9000 Seq:2378617763 Ack:1548229939 Flags:32784 WindowSize:229 Checksum:54985 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:22 tcp packet: &{SrcPort:33722 DestPort:9000 Seq:2039582436 Ack:0 Flags:40962 WindowSize:29200 Checksum:943 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:22 tcp packet: &{SrcPort:33722 DestPort:9000 Seq:2039582437 Ack:4073810135 Flags:32784 WindowSize:229 Checksum:29366 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:22 connection established
2022/05/20 23:24:22 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 131 186 242 207 226 55 121 145 142 229 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:22 checksumer: &{sum:543454 oddByte:33 length:39}
2022/05/20 23:24:22 ret:  543487
2022/05/20 23:24:22 ret:  19207
2022/05/20 23:24:22 ret:  19207
2022/05/20 23:24:22 boom packet injected
2022/05/20 23:24:22 tcp packet: &{SrcPort:33722 DestPort:9000 Seq:2039582437 Ack:4073810135 Flags:32785 WindowSize:229 Checksum:29365 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:24 tcp packet: &{SrcPort:45687 DestPort:9000 Seq:2184267027 Ack:0 Flags:40962 WindowSize:29200 Checksum:3666 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:24 tcp packet: &{SrcPort:45687 DestPort:9000 Seq:2184267028 Ack:2694641916 Flags:32784 WindowSize:229 Checksum:14230 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:24 connection established
2022/05/20 23:24:24 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 178 119 160 155 114 92 130 49 69 20 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:24 checksumer: &{sum:444171 oddByte:33 length:39}
2022/05/20 23:24:24 ret:  444204
2022/05/20 23:24:24 ret:  50994
2022/05/20 23:24:24 ret:  50994
2022/05/20 23:24:24 boom packet injected
2022/05/20 23:24:24 tcp packet: &{SrcPort:45687 DestPort:9000 Seq:2184267028 Ack:2694641916 Flags:32785 WindowSize:229 Checksum:14229 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
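The `Flags` values in these packet lines (40962 for SYN, 32784 for ACK, 32785 for FIN ACK) look like the raw 16-bit TCP field that packs the data-offset nibble together with the flag bits, so the low bits carry the actual flags. A minimal sketch, assuming that layout (the `FLAG_NAMES` mapping is the standard TCP flag assignment, not something taken from this test's source):

```python
# Decode the Flags values printed in the log, assuming they are the raw
# 16-bit TCP offset+flags field, so the low 6 bits are the flag bits.
FLAG_NAMES = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
              0x08: "PSH", 0x10: "ACK", 0x20: "URG"}

def decode_flags(raw):
    """Return the names of the flag bits set in the 16-bit field."""
    return [name for bit, name in sorted(FLAG_NAMES.items()) if raw & bit]

print(decode_flags(40962))  # SYN packet from the log
print(decode_flags(32784))  # ACK packet
print(decode_flags(32785))  # FIN ACK packet
```

Consistent with this reading, `40962 >> 12 == 10` would mean a 40-byte header (a SYN carrying options) and `32784 >> 12 == 8` a 32-byte header, which is plausible but an assumption here.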
2022/05/20 23:24:24 tcp packet: &{SrcPort:38347 DestPort:9000 Seq:2721490228 Ack:1177844799 Flags:32784 WindowSize:229 Checksum:43772 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:26 tcp packet: &{SrcPort:40992 DestPort:9000 Seq:133845875 Ack:1040907867 Flags:32784 WindowSize:229 Checksum:873 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:26 tcp packet: &{SrcPort:36540 DestPort:9000 Seq:1314946302 Ack:0 Flags:40962 WindowSize:29200 Checksum:9735 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:26 tcp packet: &{SrcPort:36540 DestPort:9000 Seq:1314946303 Ack:2097202297 Flags:32784 WindowSize:229 Checksum:40830 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:26 connection established
2022/05/20 23:24:26 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 142 188 124 255 61 217 78 96 124 255 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:26 checksumer: &{sum:591505 oddByte:33 length:39}
2022/05/20 23:24:26 ret:  591538
2022/05/20 23:24:26 ret:  1723
2022/05/20 23:24:26 ret:  1723
2022/05/20 23:24:26 boom packet injected
2022/05/20 23:24:26 tcp packet: &{SrcPort:36540 DestPort:9000 Seq:1314946303 Ack:2097202297 Flags:32785 WindowSize:229 Checksum:40829 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:28 tcp packet: &{SrcPort:38782 DestPort:9000 Seq:1978526397 Ack:1796588518 Flags:32784 WindowSize:229 Checksum:1173 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:28 tcp packet: &{SrcPort:35659 DestPort:9000 Seq:1082005961 Ack:0 Flags:40962 WindowSize:29200 Checksum:37566 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:28 tcp packet: &{SrcPort:35659 DestPort:9000 Seq:1082005962 Ack:2039935938 Flags:32784 WindowSize:229 Checksum:55427 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:28 connection established
2022/05/20 23:24:28 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 139 75 121 149 109 34 64 126 25 202 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:28 checksumer: &{sum:482634 oddByte:33 length:39}
2022/05/20 23:24:28 ret:  482667
2022/05/20 23:24:28 ret:  23922
2022/05/20 23:24:28 ret:  23922
2022/05/20 23:24:28 boom packet injected
2022/05/20 23:24:28 tcp packet: &{SrcPort:35659 DestPort:9000 Seq:1082005962 Ack:2039935938 Flags:32785 WindowSize:229 Checksum:55426 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:30 tcp packet: &{SrcPort:34792 DestPort:9000 Seq:3164318508 Ack:328968577 Flags:32784 WindowSize:229 Checksum:28471 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:30 tcp packet: &{SrcPort:39340 DestPort:9000 Seq:3108758751 Ack:0 Flags:40962 WindowSize:29200 Checksum:14506 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:30 tcp packet: &{SrcPort:39340 DestPort:9000 Seq:3108758752 Ack:1345333694 Flags:32784 WindowSize:229 Checksum:27153 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:30 connection established
2022/05/20 23:24:30 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 153 172 80 46 163 30 185 75 228 224 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:30 checksumer: &{sum:473001 oddByte:33 length:39}
2022/05/20 23:24:30 ret:  473034
2022/05/20 23:24:30 ret:  14289
2022/05/20 23:24:30 ret:  14289
2022/05/20 23:24:30 boom packet injected
2022/05/20 23:24:30 tcp packet: &{SrcPort:39340 DestPort:9000 Seq:3108758752 Ack:1345333694 Flags:32785 WindowSize:229 Checksum:27152 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:32 tcp packet: &{SrcPort:33722 DestPort:9000 Seq:2039582438 Ack:4073810136 Flags:32784 WindowSize:229 Checksum:9361 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:32 tcp packet: &{SrcPort:33532 DestPort:9000 Seq:679312496 Ack:0 Flags:40962 WindowSize:29200 Checksum:16583 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:32 tcp packet: &{SrcPort:33532 DestPort:9000 Seq:679312497 Ack:1173445718 Flags:32784 WindowSize:229 Checksum:16902 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:32 connection established
2022/05/20 23:24:32 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 130 252 69 239 213 182 40 125 124 113 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:32 checksumer: &{sum:565952 oddByte:33 length:39}
2022/05/20 23:24:32 ret:  565985
2022/05/20 23:24:32 ret:  41705
2022/05/20 23:24:32 ret:  41705
2022/05/20 23:24:32 boom packet injected
2022/05/20 23:24:32 tcp packet: &{SrcPort:33532 DestPort:9000 Seq:679312497 Ack:1173445718 Flags:32785 WindowSize:229 Checksum:16901 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:34 tcp packet: &{SrcPort:45687 DestPort:9000 Seq:2184267029 Ack:2694641917 Flags:32784 WindowSize:229 Checksum:59757 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:34 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:4027897911 Ack:0 Flags:40962 WindowSize:29200 Checksum:6086 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:34 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:4027897912 Ack:1378499973 Flags:32784 WindowSize:229 Checksum:9162 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:34 connection established
2022/05/20 23:24:34 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 128 205 82 40 182 229 240 20 216 56 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:34 checksumer: &{sum:473808 oddByte:33 length:39}
2022/05/20 23:24:34 ret:  473841
2022/05/20 23:24:34 ret:  15096
2022/05/20 23:24:34 ret:  15096
2022/05/20 23:24:34 boom packet injected
2022/05/20 23:24:34 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:4027897912 Ack:1378499973 Flags:32785 WindowSize:229 Checksum:9161 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:36 tcp packet: &{SrcPort:43283 DestPort:9000 Seq:1649184348 Ack:0 Flags:40962 WindowSize:29200 Checksum:48978 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:36 tcp packet: &{SrcPort:43283 DestPort:9000 Seq:1649184349 Ack:828828987 Flags:32784 WindowSize:229 Checksum:13460 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:36 connection established
2022/05/20 23:24:36 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 169 19 49 101 102 155 98 76 142 93 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:36 checksumer: &{sum:446384 oddByte:33 length:39}
2022/05/20 23:24:36 ret:  446417
2022/05/20 23:24:36 ret:  53207
2022/05/20 23:24:36 ret:  53207
2022/05/20 23:24:36 boom packet injected
2022/05/20 23:24:36 tcp packet: &{SrcPort:43283 DestPort:9000 Seq:1649184349 Ack:828828987 Flags:32785 WindowSize:229 Checksum:13458 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:36 tcp packet: &{SrcPort:36540 DestPort:9000 Seq:1314946304 Ack:2097202298 Flags:32784 WindowSize:229 Checksum:20822 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:38 tcp packet: &{SrcPort:42977 DestPort:9000 Seq:2658159286 Ack:0 Flags:40962 WindowSize:29200 Checksum:49205 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:38 tcp packet: &{SrcPort:42977 DestPort:9000 Seq:2658159287 Ack:2151443924 Flags:32784 WindowSize:229 Checksum:24120 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:38 connection established
2022/05/20 23:24:38 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 167 225 128 58 231 52 158 112 74 183 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:38 checksumer: &{sum:494198 oddByte:33 length:39}
2022/05/20 23:24:38 ret:  494231
2022/05/20 23:24:38 ret:  35486
2022/05/20 23:24:38 ret:  35486
2022/05/20 23:24:38 boom packet injected
2022/05/20 23:24:38 tcp packet: &{SrcPort:42977 DestPort:9000 Seq:2658159287 Ack:2151443924 Flags:32785 WindowSize:229 Checksum:24119 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:38 tcp packet: &{SrcPort:35659 DestPort:9000 Seq:1082005963 Ack:2039935939 Flags:32784 WindowSize:229 Checksum:35417 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:40 tcp packet: &{SrcPort:39340 DestPort:9000 Seq:3108758753 Ack:1345333695 Flags:32784 WindowSize:229 Checksum:7150 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:40 tcp packet: &{SrcPort:35517 DestPort:9000 Seq:4035153196 Ack:0 Flags:40962 WindowSize:29200 Checksum:16640 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:40 tcp packet: &{SrcPort:35517 DestPort:9000 Seq:4035153197 Ack:2594370319 Flags:32784 WindowSize:229 Checksum:14223 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:40 connection established
2022/05/20 23:24:40 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 138 189 154 161 108 111 240 131 141 45 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:40 checksumer: &{sum:496013 oddByte:33 length:39}
2022/05/20 23:24:40 ret:  496046
2022/05/20 23:24:40 ret:  37301
2022/05/20 23:24:40 ret:  37301
2022/05/20 23:24:40 boom packet injected
2022/05/20 23:24:40 tcp packet: &{SrcPort:35517 DestPort:9000 Seq:4035153197 Ack:2594370319 Flags:32785 WindowSize:229 Checksum:14222 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:42 tcp packet: &{SrcPort:33532 DestPort:9000 Seq:679312498 Ack:1173445719 Flags:32784 WindowSize:229 Checksum:62434 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:42 tcp packet: &{SrcPort:46498 DestPort:9000 Seq:3129374510 Ack:0 Flags:40962 WindowSize:29200 Checksum:23109 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.200
2022/05/20 23:24:42 tcp packet: &{SrcPort:46498 DestPort:9000 Seq:3129374511 Ack:2663164077 Flags:32784 WindowSize:229 Checksum:36685 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.200
2022/05/20 23:24:42 connection established
2022/05/20 23:24:42 calling checksumTCP: 10.244.3.29 10.244.4.200 [35 40 181 162 158 187 34 13 186 134 119 47 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/20 23:24:42 checksumer: &{sum:471846 oddByte:33 length:39}
2022/05/20 23:24:42 ret:  471879
2022/05/20 23:24:42 ret:  13134
2022/05/20 23:24:42 ret:  13134
2022/05/20 23:24:42 boom packet injected
2022/05/20 23:24:42 tcp packet: &{SrcPort:46498 DestPort:9000 Seq:3129374511 Ack:2663164077 Flags:32785 WindowSize:229 Checksum:36684 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.200
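The `checksumer:` and `ret:` lines in each cycle above are reproducible as a standard Internet (RFC 1071 style) one's-complement fold: the running sum absorbs the trailing odd byte (first `ret:`), then the carries above 16 bits are folded back in until none remain (the repeated final `ret:`). A sketch using the values from the 23:24:18 cycle, assuming that algorithm:

```python
# Reproduce the "checksumer"/"ret:" arithmetic from the log, assuming a
# standard one's-complement checksum fold (RFC 1071 style).
def fold_checksum(total, odd_byte=0):
    total += odd_byte            # first "ret:" line: sum + oddByte
    while total >> 16:           # subsequent "ret:" lines: fold carries
        total = (total & 0xFFFF) + (total >> 16)
    return total

print(496670 + 33)                # 496703, matching the first "ret:"
print(fold_checksum(496670, 33))  # 37958, matching the folded result

# The injected payload bytes in each "calling checksumTCP" line decode
# to the test's marker string:
print(bytes([98, 111, 111, 109, 33, 33, 33]).decode())  # boom!!!
```

The same fold reproduces every cycle in the log, e.g. `fold_checksum(539555, 33)` gives 15308 for the 23:24:20 cycle.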

May 20 23:24:43.398: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:43.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-382" for this suite.


• [SLOW TEST:76.192 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":609,"failed":0}
May 20 23:24:43.411: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:09.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
May 20 23:24:09.488: INFO: deploying iperf2 server
May 20 23:24:09.492: INFO: Waiting for deployment "iperf2-server-deployment" to complete
May 20 23:24:09.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May 20 23:24:11.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685849, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788685849, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 23:24:13.506: INFO: waiting for iperf2 server endpoints
May 20 23:24:15.510: INFO: found iperf2 server endpoints
May 20 23:24:15.510: INFO: waiting for client pods to be running
May 20 23:24:19.517: INFO: all client pods are ready: 2 pods
May 20 23:24:19.519: INFO: server pod phase Running
May 20 23:24:19.519: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 23:24:09 +0000 UTC Reason: Message:}
May 20 23:24:19.519: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 23:24:13 +0000 UTC Reason: Message:}
May 20 23:24:19.519: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 23:24:13 +0000 UTC Reason: Message:}
May 20 23:24:19.519: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-20 23:24:09 +0000 UTC Reason: Message:}
May 20 23:24:19.519: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-05-20 23:24:12 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://d0c2502cd3da6e855d6676cfb9e7f485ab1d772f0be0d449171cf0093a48de12 Started:0xc0038e041b}
May 20 23:24:19.519: INFO: found 2 matching client pods
May 20 23:24:19.522: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5754 PodName:iperf2-clients-bqjc8 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:24:19.522: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:24:19.611: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May 20 23:24:19.611: INFO: iperf version: 
May 20 23:24:19.611: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-bqjc8 (node node1)
May 20 23:24:19.614: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5754 PodName:iperf2-clients-bqjc8 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:24:19.615: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:24:34.740: INFO: Exec stderr: ""
May 20 23:24:34.740: INFO: output from exec on client pod iperf2-clients-bqjc8 (node node1): 
20220520232420.717,10.244.4.215,48762,10.233.8.255,6789,3,0.0-1.0,75759616,606076928
20220520232421.723,10.244.4.215,48762,10.233.8.255,6789,3,1.0-2.0,116391936,931135488
20220520232422.709,10.244.4.215,48762,10.233.8.255,6789,3,2.0-3.0,115605504,924844032
20220520232423.717,10.244.4.215,48762,10.233.8.255,6789,3,3.0-4.0,116654080,933232640
20220520232424.706,10.244.4.215,48762,10.233.8.255,6789,3,4.0-5.0,117178368,937426944
20220520232425.720,10.244.4.215,48762,10.233.8.255,6789,3,5.0-6.0,116129792,929038336
20220520232426.713,10.244.4.215,48762,10.233.8.255,6789,3,6.0-7.0,115736576,925892608
20220520232427.701,10.244.4.215,48762,10.233.8.255,6789,3,7.0-8.0,118095872,944766976
20220520232428.720,10.244.4.215,48762,10.233.8.255,6789,3,8.0-9.0,117702656,941621248
20220520232429.707,10.244.4.215,48762,10.233.8.255,6789,3,9.0-10.0,106561536,852492288
20220520232429.707,10.244.4.215,48762,10.233.8.255,6789,3,0.0-10.0,1115815936,892520209

May 20 23:24:34.743: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5754 PodName:iperf2-clients-ql5nt ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:24:34.743: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:24:34.834: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May 20 23:24:34.834: INFO: iperf version: 
May 20 23:24:34.834: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-ql5nt (node node2)
May 20 23:24:34.837: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5754 PodName:iperf2-clients-ql5nt ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 20 23:24:34.837: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:24:49.961: INFO: Exec stderr: ""
May 20 23:24:49.961: INFO: output from exec on client pod iperf2-clients-ql5nt (node node2): 
20220520232435.939,10.244.3.44,54600,10.233.8.255,6789,3,0.0-1.0,3363307520,26906460160
20220520232436.927,10.244.3.44,54600,10.233.8.255,6789,3,1.0-2.0,3429367808,27434942464
20220520232437.934,10.244.3.44,54600,10.233.8.255,6789,3,2.0-3.0,3449946112,27599568896
20220520232438.920,10.244.3.44,54600,10.233.8.255,6789,3,3.0-4.0,3420848128,27366785024
20220520232439.926,10.244.3.44,54600,10.233.8.255,6789,3,4.0-5.0,3366060032,26928480256
20220520232440.933,10.244.3.44,54600,10.233.8.255,6789,3,5.0-6.0,3460038656,27680309248
20220520232441.921,10.244.3.44,54600,10.233.8.255,6789,3,6.0-7.0,3492806656,27942453248
20220520232442.930,10.244.3.44,54600,10.233.8.255,6789,3,7.0-8.0,3515744256,28125954048
20220520232443.938,10.244.3.44,54600,10.233.8.255,6789,3,8.0-9.0,3371040768,26968326144
20220520232444.925,10.244.3.44,54600,10.233.8.255,6789,3,9.0-10.0,3387949056,27103592448
20220520232444.925,10.244.3.44,54600,10.233.8.255,6789,3,0.0-10.0,34257108992,27405640604

May 20 23:24:49.961: INFO:                                From                                 To    Bandwidth (MB/s)
May 20 23:24:49.961: INFO:                               node1                              node2                 106
May 20 23:24:49.961: INFO:                               node2                              node2                3267
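The bandwidth table above can be reproduced from the iperf2 `--reportstyle C` CSV lines printed earlier: each line ends with bytes transferred and bits/sec for the interval, and the 0.0-10.0 summary line's bits/sec, divided by 8 and by 2**20, yields the MB/s figures (106 and 3267). A sketch, assuming that field layout and conversion:

```python
# Parse iperf2 "--reportstyle C" CSV summary lines from the log and
# reproduce the framework's MB/s table, assuming 2**20-byte megabytes.
def parse_summary(csv_line):
    fields = csv_line.split(",")
    # timestamp, src-ip, src-port, dst-ip, dst-port, id, interval,
    # bytes transferred, bits/sec
    return fields[6], int(fields[7]), int(fields[8])

def mb_per_sec(bits_per_sec):
    return int(bits_per_sec / 8 / 2**20)

line1 = "20220520232429.707,10.244.4.215,48762,10.233.8.255,6789,3,0.0-10.0,1115815936,892520209"
line2 = "20220520232444.925,10.244.3.44,54600,10.233.8.255,6789,3,0.0-10.0,34257108992,27405640604"
print(mb_per_sec(parse_summary(line1)[2]))  # 106  (node1 -> node2)
print(mb_per_sec(parse_summary(line2)[2]))  # 3267 (node2 -> node2)
```

As a sanity check, the per-interval lines are internally consistent too: the first 1-second interval moved 75759616 bytes, i.e. exactly 606076928 bits/sec as reported.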
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:24:49.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-5754" for this suite.


• [SLOW TEST:40.510 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":189,"failed":0}
May 20 23:24:49.973: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:46.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-2943
STEP: creating a client pod for probing the service svc-udp
May 20 23:23:46.482: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:48.485: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:50.485: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:52.487: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:54.485: INFO: The status of Pod pod-client is Running (Ready = true)
May 20 23:23:54.843: INFO: Pod client logs: Fri May 20 23:23:50 UTC 2022
Fri May 20 23:23:50 UTC 2022 Try: 1

Fri May 20 23:23:50 UTC 2022 Try: 2

Fri May 20 23:23:50 UTC 2022 Try: 3

Fri May 20 23:23:50 UTC 2022 Try: 4

Fri May 20 23:23:50 UTC 2022 Try: 5

Fri May 20 23:23:50 UTC 2022 Try: 6

Fri May 20 23:23:50 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
May 20 23:23:54.856: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:56.861: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:58.859: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:00.861: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-2943 to expose endpoints map[pod-server-1:[80]]
May 20 23:24:00.872: INFO: successfully validated that service svc-udp in namespace conntrack-2943 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
May 20 23:25:00.906: INFO: Pod client logs: Fri May 20 23:23:50 UTC 2022
Fri May 20 23:23:50 UTC 2022 Try: 1

Fri May 20 23:23:50 UTC 2022 Try: 2

Fri May 20 23:23:50 UTC 2022 Try: 3

Fri May 20 23:23:50 UTC 2022 Try: 4

Fri May 20 23:23:50 UTC 2022 Try: 5

Fri May 20 23:23:50 UTC 2022 Try: 6

Fri May 20 23:23:50 UTC 2022 Try: 7

Fri May 20 23:23:55 UTC 2022 Try: 8

Fri May 20 23:23:55 UTC 2022 Try: 9

Fri May 20 23:23:55 UTC 2022 Try: 10

Fri May 20 23:23:55 UTC 2022 Try: 11

Fri May 20 23:23:55 UTC 2022 Try: 12

Fri May 20 23:23:55 UTC 2022 Try: 13

Fri May 20 23:24:00 UTC 2022 Try: 14

Fri May 20 23:24:00 UTC 2022 Try: 15

Fri May 20 23:24:00 UTC 2022 Try: 16

Fri May 20 23:24:00 UTC 2022 Try: 17

Fri May 20 23:24:00 UTC 2022 Try: 18

Fri May 20 23:24:00 UTC 2022 Try: 19

Fri May 20 23:24:05 UTC 2022 Try: 20

Fri May 20 23:24:05 UTC 2022 Try: 21

Fri May 20 23:24:05 UTC 2022 Try: 22

Fri May 20 23:24:05 UTC 2022 Try: 23

Fri May 20 23:24:05 UTC 2022 Try: 24

Fri May 20 23:24:05 UTC 2022 Try: 25

Fri May 20 23:24:10 UTC 2022 Try: 26

Fri May 20 23:24:10 UTC 2022 Try: 27

Fri May 20 23:24:10 UTC 2022 Try: 28

Fri May 20 23:24:10 UTC 2022 Try: 29

Fri May 20 23:24:10 UTC 2022 Try: 30

Fri May 20 23:24:10 UTC 2022 Try: 31

Fri May 20 23:24:15 UTC 2022 Try: 32

Fri May 20 23:24:15 UTC 2022 Try: 33

Fri May 20 23:24:15 UTC 2022 Try: 34

Fri May 20 23:24:15 UTC 2022 Try: 35

Fri May 20 23:24:15 UTC 2022 Try: 36

Fri May 20 23:24:15 UTC 2022 Try: 37

Fri May 20 23:24:20 UTC 2022 Try: 38

Fri May 20 23:24:20 UTC 2022 Try: 39

Fri May 20 23:24:20 UTC 2022 Try: 40

Fri May 20 23:24:20 UTC 2022 Try: 41

Fri May 20 23:24:20 UTC 2022 Try: 42

Fri May 20 23:24:20 UTC 2022 Try: 43

Fri May 20 23:24:25 UTC 2022 Try: 44

Fri May 20 23:24:25 UTC 2022 Try: 45

Fri May 20 23:24:25 UTC 2022 Try: 46

Fri May 20 23:24:25 UTC 2022 Try: 47

Fri May 20 23:24:25 UTC 2022 Try: 48

Fri May 20 23:24:25 UTC 2022 Try: 49

Fri May 20 23:24:30 UTC 2022 Try: 50

Fri May 20 23:24:30 UTC 2022 Try: 51

Fri May 20 23:24:30 UTC 2022 Try: 52

Fri May 20 23:24:30 UTC 2022 Try: 53

Fri May 20 23:24:30 UTC 2022 Try: 54

Fri May 20 23:24:30 UTC 2022 Try: 55

Fri May 20 23:24:35 UTC 2022 Try: 56

Fri May 20 23:24:35 UTC 2022 Try: 57

Fri May 20 23:24:35 UTC 2022 Try: 58

Fri May 20 23:24:35 UTC 2022 Try: 59

Fri May 20 23:24:35 UTC 2022 Try: 60

Fri May 20 23:24:35 UTC 2022 Try: 61

Fri May 20 23:24:40 UTC 2022 Try: 62

Fri May 20 23:24:40 UTC 2022 Try: 63

Fri May 20 23:24:40 UTC 2022 Try: 64

Fri May 20 23:24:40 UTC 2022 Try: 65

Fri May 20 23:24:40 UTC 2022 Try: 66

Fri May 20 23:24:40 UTC 2022 Try: 67

Fri May 20 23:24:45 UTC 2022 Try: 68

Fri May 20 23:24:45 UTC 2022 Try: 69

Fri May 20 23:24:45 UTC 2022 Try: 70

Fri May 20 23:24:45 UTC 2022 Try: 71

Fri May 20 23:24:45 UTC 2022 Try: 72

Fri May 20 23:24:45 UTC 2022 Try: 73

Fri May 20 23:24:50 UTC 2022 Try: 74

Fri May 20 23:24:50 UTC 2022 Try: 75

Fri May 20 23:24:50 UTC 2022 Try: 76

Fri May 20 23:24:50 UTC 2022 Try: 77

Fri May 20 23:24:50 UTC 2022 Try: 78

Fri May 20 23:24:50 UTC 2022 Try: 79

Fri May 20 23:24:55 UTC 2022 Try: 80

Fri May 20 23:24:55 UTC 2022 Try: 81

Fri May 20 23:24:55 UTC 2022 Try: 82

Fri May 20 23:24:55 UTC 2022 Try: 83

Fri May 20 23:24:55 UTC 2022 Try: 84

Fri May 20 23:24:55 UTC 2022 Try: 85

Fri May 20 23:25:00 UTC 2022 Try: 86

Fri May 20 23:25:00 UTC 2022 Try: 87

Fri May 20 23:25:00 UTC 2022 Try: 88

Fri May 20 23:25:00 UTC 2022 Try: 89

Fri May 20 23:25:00 UTC 2022 Try: 90

Fri May 20 23:25:00 UTC 2022 Try: 91

May 20 23:25:00.906: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0014faa80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0014faa80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0014faa80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-2943".
STEP: Found 8 events.
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:49 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:49 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 322.403864ms
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:49 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:50 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:56 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:57 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 347.236026ms
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:57 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
May 20 23:25:00.910: INFO: At 2022-05-20 23:23:57 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
May 20 23:25:00.913: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
May 20 23:25:00.913: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:46 +0000 UTC  }]
May 20 23:25:00.913: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:23:54 +0000 UTC  }]
May 20 23:25:00.913: INFO: 
May 20 23:25:00.918: INFO: 
Logging node info for node master1
May 20 23:25:00.921: INFO: Node Info: &Node{ObjectMeta:{master1    b016dcf2-74b7-4456-916a-8ca363b9ccc3 76137 0 2022-05-20 20:01:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:25:00 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:25:00 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:25:00 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:25:00 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:25:00.922: INFO: 
Logging kubelet events for node master1
May 20 23:25:00.925: INFO: 
Logging pods the kubelet thinks are on node master1
May 20 23:25:00.947: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:00.947: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:25:00.947: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-scheduler ready: true, restart count 1
May 20 23:25:00.947: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:25:00.947: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-controller-manager ready: true, restart count 3
May 20 23:25:00.947: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:25:00.947: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:25:00.947: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:25:00.947: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container nfd-controller ready: true, restart count 0
May 20 23:25:00.947: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:25:00.947: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container docker-registry ready: true, restart count 0
May 20 23:25:00.947: INFO: 	Container nginx ready: true, restart count 0
May 20 23:25:00.947: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:00.947: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:00.947: INFO: 	Container prometheus-operator ready: true, restart count 0
May 20 23:25:01.036: INFO: 
Latency metrics for node master1
May 20 23:25:01.036: INFO: 
Logging node info for node master2
May 20 23:25:01.039: INFO: Node Info: &Node{ObjectMeta:{master2    ddc04b08-e43a-4e18-a612-aa3bf7f8411e 76132 0 2022-05-20 20:01:56 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:59 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:59 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:59 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:59 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:25:01.039: INFO: 
Logging kubelet events for node master2
May 20 23:25:01.042: INFO: 
Logging pods the kubelet thinks are on node master2
May 20 23:25:01.050: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:25:01.050: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container kube-scheduler ready: true, restart count 3
May 20 23:25:01.050: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 20 23:25:01.050: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container kube-proxy ready: true, restart count 1
May 20 23:25:01.050: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:25:01.050: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:25:01.050: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container coredns ready: true, restart count 1
May 20 23:25:01.050: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container autoscaler ready: true, restart count 1
May 20 23:25:01.050: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.050: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:01.050: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:25:01.050: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.051: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:25:01.141: INFO: 
Latency metrics for node master2
May 20 23:25:01.141: INFO: 
Logging node info for node master3
May 20 23:25:01.143: INFO: Node Info: &Node{ObjectMeta:{master3    f42c1bd6-d828-4857-9180-56c73dcc370f 76118 0 2022-05-20 20:02:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:57 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:57 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:57 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:57 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:25:01.144: INFO: 
Logging kubelet events for node master3
May 20 23:25:01.146: INFO: 
Logging pods the kubelet thinks are on node master3
May 20 23:25:01.155: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container coredns ready: true, restart count 1
May 20 23:25:01.155: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:01.155: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:25:01.155: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:25:01.155: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:25:01.155: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:25:01.155: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Init container install-cni ready: true, restart count 0
May 20 23:25:01.155: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:25:01.155: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 20 23:25:01.155: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.155: INFO: 	Container kube-scheduler ready: true, restart count 2
May 20 23:25:01.240: INFO: 
Latency metrics for node master3
May 20 23:25:01.240: INFO: 
Logging node info for node node1
May 20 23:25:01.243: INFO: Node Info: &Node{ObjectMeta:{node1    65c381dd-b6f5-4e67-a327-7a45366d15af 76112 0 2022-05-20 20:03:10 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-20 22:57:29 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 
0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:56 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:56 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:56 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:56 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:25:01.244: INFO: 
Logging kubelet events for node node1
May 20 23:25:01.248: INFO: 
Logging pods the kubelet thinks are on node node1
May 20 23:25:01.266: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:25:01.266: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 20 23:25:01.266: INFO: service-proxy-disabled-pmwdl started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:25:01.266: INFO: verify-service-down-host-exec-pod started at  (0+0 container statuses recorded)
May 20 23:25:01.266: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:25:01.266: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:25:01.266: INFO: startup-script started at 2022-05-20 23:23:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container startup-script ready: true, restart count 0
May 20 23:25:01.266: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:25:01.266: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container config-reloader ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container grafana ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container prometheus ready: true, restart count 1
May 20 23:25:01.266: INFO: up-down-2-whrhh started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:25:01.266: INFO: iperf2-clients-bqjc8 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container iperf2-client ready: false, restart count 0
May 20 23:25:01.266: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container collectd ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:25:01.266: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:25:01.266: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container discover ready: false, restart count 0
May 20 23:25:01.266: INFO: 	Container init ready: false, restart count 0
May 20 23:25:01.266: INFO: 	Container install ready: false, restart count 0
May 20 23:25:01.266: INFO: service-proxy-disabled-9hsfn started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:25:01.266: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:25:01.266: INFO: 	Container kube-flannel ready: true, restart count 3
May 20 23:25:01.266: INFO: up-down-3-txdwc started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:25:01.266: INFO: pod-client started at 2022-05-20 23:23:46 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container pod-client ready: true, restart count 0
May 20 23:25:01.266: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:25:01.266: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:25:01.266: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:25:01.266: INFO: up-down-2-6pjb4 started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:25:01.266: INFO: service-proxy-disabled-bd8ld started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container service-proxy-disabled ready: true, restart count 0
May 20 23:25:01.266: INFO: up-down-3-s8wr5 started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.266: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:25:01.581: INFO: 
Latency metrics for node node1
May 20 23:25:01.581: INFO: 
Logging node info for node node2
May 20 23:25:01.584: INFO: Node Info: &Node{ObjectMeta:{node2    a0e0a426-876d-4419-96e4-c6977ef3393c 76057 0 2022-05-20 20:03:09 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-05-20 22:31:06 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 
+0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:24:53 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:24:53 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 
localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:25:01.585: INFO: 
Logging kubelet events for node node2
May 20 23:25:01.587: INFO: 
Logging pods the kubelet thinks are on node node2
May 20 23:25:01.602: INFO: nodeport-update-service-skqd5 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:25:01.602: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container discover ready: false, restart count 0
May 20 23:25:01.602: INFO: 	Container init ready: false, restart count 0
May 20 23:25:01.602: INFO: 	Container install ready: false, restart count 0
May 20 23:25:01.602: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container collectd ready: true, restart count 0
May 20 23:25:01.602: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:25:01.602: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:25:01.602: INFO: up-down-2-b7fkw started at 2022-05-20 23:23:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container up-down-2 ready: true, restart count 0
May 20 23:25:01.602: INFO: iperf2-server-deployment-59979d877-bmv7x started at 2022-05-20 23:24:09 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container iperf2-server ready: false, restart count 0
May 20 23:25:01.602: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:25:01.602: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:25:01.602: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Init container install-cni ready: true, restart count 1
May 20 23:25:01.602: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:25:01.602: INFO: iperf2-clients-ql5nt started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container iperf2-client ready: false, restart count 0
May 20 23:25:01.602: INFO: pod-server-1 started at 2022-05-20 23:23:54 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:25:01.602: INFO: verify-service-up-host-exec-pod started at  (0+0 container statuses recorded)
May 20 23:25:01.602: INFO: service-proxy-toggled-x4qjm started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container service-proxy-toggled ready: true, restart count 0
May 20 23:25:01.602: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:25:01.602: INFO: service-proxy-toggled-bch5g started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container service-proxy-toggled ready: true, restart count 0
May 20 23:25:01.602: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:25:01.602: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:25:01.602: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:25:01.602: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:25:01.602: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:25:01.602: INFO: nodeport-update-service-lrn9m started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:25:01.602: INFO: execpodpnrdf started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:25:01.602: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container cmk-webhook ready: true, restart count 0
May 20 23:25:01.602: INFO: up-down-3-qrm62 started at 2022-05-20 23:24:30 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container up-down-3 ready: true, restart count 0
May 20 23:25:01.602: INFO: service-proxy-toggled-zg2kg started at 2022-05-20 23:24:35 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container service-proxy-toggled ready: true, restart count 0
May 20 23:25:01.602: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:25:01.602: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May 20 23:25:01.602: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:25:01.602: INFO: 	Container tas-extender ready: true, restart count 0
May 20 23:25:01.934: INFO: 
Latency metrics for node node2
May 20 23:25:01.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-2943" for this suite.


• Failure [75.509 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  May 20 23:25:00.906: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":395,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
May 20 23:25:01.947: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:23:06.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-2834
STEP: creating service up-down-1 in namespace services-2834
STEP: creating replication controller up-down-1 in namespace services-2834
I0520 23:23:06.109750      27 runners.go:190] Created replication controller with name: up-down-1, namespace: services-2834, replica count: 3
I0520 23:23:09.162200      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:12.163312      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:15.164134      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:18.166424      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-2834
STEP: creating service up-down-2 in namespace services-2834
STEP: creating replication controller up-down-2 in namespace services-2834
I0520 23:23:18.178224      27 runners.go:190] Created replication controller with name: up-down-2, namespace: services-2834, replica count: 3
I0520 23:23:21.229400      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:24.230089      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:27.231422      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:23:30.231622      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
May 20 23:23:30.233: INFO: Creating new host exec pod
May 20 23:23:30.246: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:32.248: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:34.250: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:36.249: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:38.249: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:23:38.249: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:23:48.347: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.3.10:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-host-exec-pod
May 20 23:23:48.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.3.10:80 2>&1 || true; echo; done'
May 20 23:23:49.039: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 
1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n"
May 20 23:23:49.040: INFO: stdout: "up-down-1-j844p\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1
-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-l2mml\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-mt67q\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-j844p\nup-down-1-mt67q\nup-down-1-l2mml\n"
May 20 23:23:49.040: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.3.10:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-exec-pod-lpdgk
May 20 23:23:49.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-exec-pod-lpdgk -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.3.10:80 2>&1 || true; echo; done'
May 20 23:23:49.600: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 
1 -O - http://10.233.3.10:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo\n ... (the same "+ wget -q -T 1 -O - http://10.233.3.10:80\n+ echo" pair repeated through all 150 iterations; trimmed) ..."
May 20 23:23:49.601: INFO: stdout: "up-down-1-j844p\nup-down-1-l2mml\nup-down-1-j844p\nup-down-1-mt67q\n ... (150 responses total, trimmed; all from the three backends up-down-1-j844p, up-down-1-l2mml, and up-down-1-mt67q) ..."
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2834
STEP: Deleting pod verify-service-up-exec-pod-lpdgk in namespace services-2834
STEP: verifying service up-down-2 is up
May 20 23:23:49.615: INFO: Creating new host exec pod
May 20 23:23:49.625: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:51.630: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:23:53.628: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:23:53.628: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:24:01.645: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-host-exec-pod
May 20 23:24:01.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:02.481: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n ... (the same "+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo" pair repeated through all 150 iterations; trimmed) ..."
May 20 23:24:02.481: INFO: stdout: "up-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\n ... (150 responses total, trimmed; all from the three backends up-down-2-whrhh, up-down-2-6pjb4, and up-down-2-b7fkw) ..."
May 20 23:24:02.482: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-exec-pod-rgjzn
May 20 23:24:02.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-exec-pod-rgjzn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:02.998: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n ... (the same "+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo" pair repeated through all 150 iterations; trimmed) ..."
May 20 23:24:02.998: INFO: stdout: "up-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\n ... (150 responses total, trimmed; all from the three backends up-down-2-whrhh, up-down-2-6pjb4, and up-down-2-b7fkw) ..."
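The "service has 3 reachable backends" step works because each `wget` in the loop prints the name of the backend pod that served it; the test then needs to see every expected pod name in the output. A minimal sketch of that counting step (the sample `responses` data below is illustrative, copied from pod names in this log):

```shell
# Count distinct backend pod names in captured wget output.
# Sample responses, as a wget loop against the service VIP would print them:
responses="up-down-2-whrhh
up-down-2-6pjb4
up-down-2-b7fkw
up-down-2-whrhh
up-down-2-6pjb4"

# sort -u collapses duplicate lines; wc -l counts the distinct backends.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "distinct backends: $distinct"
```

If `distinct` matches the replica count of the service (3 here), every endpoint is receiving traffic through the VIP.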
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2834
STEP: Deleting pod verify-service-up-exec-pod-rgjzn in namespace services-2834
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-2834, will wait for the garbage collector to delete the pods
May 20 23:24:03.071: INFO: Deleting ReplicationController up-down-1 took: 3.519906ms
May 20 23:24:03.172: INFO: Terminating ReplicationController up-down-1 pods took: 101.099237ms
STEP: verifying service up-down-1 is not up
May 20 23:24:09.382: INFO: Creating new host exec pod
May 20 23:24:09.396: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:11.401: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:24:11.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.10:80 && echo service-down-failed'
May 20 23:24:13.649: INFO: rc: 28
May 20 23:24:13.649: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.3.10:80 && echo service-down-failed" in pod services-2834/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.3.10:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.3.10:80
command terminated with exit code 28

error:
exit status 28
Output: 
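The `rc: 28` above is the expected outcome: curl exit code 28 means the connect timeout expired, i.e. nothing is answering on the deleted service's VIP, so the `echo service-down-failed` after `&&` never runs. A sketch of that interpretation, with the probe command passed in as arguments so the logic can be exercised without a live cluster (the `check_down` helper and the stubbed `sh -c` probe are illustrative, not part of the e2e framework):

```shell
# Interpret a "verify service is down" probe the way the log above does:
# exit 0 means a backend answered (the service is still up, which is a
# failure here); curl's 28 (connect timeout) or 7 (connection refused)
# both indicate the service is gone.
check_down() {
    "$@"    # real probe: curl -g -s --connect-timeout 2 http://<cluster-ip>:80
    rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "service-down-failed"   # a backend answered: service still up
        return 1
    fi
    case "$rc" in
        28) echo "service is down (connect timeout)" ;;
        7)  echo "service is down (connection refused)" ;;
        *)  echo "probe failed with rc=$rc" ;;
    esac
}

# Stubbed usage (no cluster needed): simulate curl's timeout exit code.
check_down sh -c 'exit 28'
```

A VIP with no endpoints typically times out (28) rather than refusing the connection, because kube-proxy has removed the forwarding rules and nothing is left to reset the connection.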
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2834
STEP: verifying service up-down-2 is still up
May 20 23:24:13.656: INFO: Creating new host exec pod
May 20 23:24:13.668: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:15.671: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:17.674: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:19.675: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:24:19.675: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:24:27.694: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-host-exec-pod
May 20 23:24:27.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:29.740: INFO: stderr: "+ seq 1 150\n" + 150 × "+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n"
May 20 23:24:29.740: INFO: stdout: "up-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2
-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\n"
May 20 23:24:29.740: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-exec-pod-z2xsk
May 20 23:24:29.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-exec-pod-z2xsk -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:30.145: INFO: stderr: "+ seq 1 150\n" + 150 × "+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n"
May 20 23:24:30.146: INFO: stdout: "up-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2
-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2834
STEP: Deleting pod verify-service-up-exec-pod-z2xsk in namespace services-2834
STEP: creating service up-down-3 in namespace services-2834
STEP: creating service up-down-3 in namespace services-2834
STEP: creating replication controller up-down-3 in namespace services-2834
I0520 23:24:30.170416      27 runners.go:190] Created replication controller with name: up-down-3, namespace: services-2834, replica count: 3
I0520 23:24:33.221722      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:36.222073      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:39.223253      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
May 20 23:24:39.225: INFO: Creating new host exec pod
May 20 23:24:39.237: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:41.242: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:43.241: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:45.240: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:24:45.240: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:24:51.260: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-host-exec-pod
May 20 23:24:51.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:51.644: INFO: stderr: "+ seq 1 150\n" + 150 × "+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n"
May 20 23:24:51.644: INFO: stdout: "up-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2
-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\n"
May 20 23:24:51.645: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-exec-pod-wq5jz
May 20 23:24:51.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-exec-pod-wq5jz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.55.247:80 2>&1 || true; echo; done'
May 20 23:24:52.045: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.55.247:80\n+ echo\n"
May 20 23:24:52.046: INFO: stdout: "up-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2
-whrhh\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-whrhh\nup-down-2-whrhh\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-b7fkw\nup-down-2-b7fkw\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\nup-down-2-6pjb4\n"
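The probe the test just ran has a fixed shape: 150 iterations of a 1-second-timeout `wget` against the service ClusterIP, with `|| true` so a failed request never aborts the loop and a trailing `echo` so even an empty response still produces one output line. A minimal offline sketch of that shape, with the network call replaced by a stub (`probe` is a hypothetical stand-in, and 5 iterations instead of 150, so it runs without a cluster):

```shell
#!/bin/sh
# Stand-in for: wget -q -T 1 -O - http://10.233.55.247:80
# (assumed: each successful request returns one backend pod name)
probe() { echo "up-down-2-whrhh"; }

# Same loop structure as the log's command; count response lines that
# name a backend pod. With 5 iterations and a stub that always answers,
# every iteration contributes one matching line.
count=$(for i in $(seq 1 5); do probe 2>&1 || true; echo; done | grep -c 'up-down-2')
echo "$count"
```

The `|| true` and unconditional `echo` matter: they keep the response count aligned with the request count, so dropped requests show up as blank lines rather than silently shrinking the sample.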
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2834
STEP: Deleting pod verify-service-up-exec-pod-wq5jz in namespace services-2834
STEP: verifying service up-down-3 is up
May 20 23:24:52.060: INFO: Creating new host exec pod
May 20 23:24:52.073: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:54.076: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:56.077: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:58.076: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:00.076: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:02.077: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:04.076: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:06.078: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:08.076: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:25:08.076: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:25:12.095: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.61.248:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-host-exec-pod
May 20 23:25:12.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.61.248:80 2>&1 || true; echo; done'
May 20 23:25:12.465: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n"
May 20 23:25:12.465: INFO: stdout: "up-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3
-s8wr5\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-txdwc\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-s8wr5\nup-down-3-qrm62\nup-down-3-txdwc\nup-down-3-qrm62\n"
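The "verifying service has 3 reachable backends" step implies a check on the stdout above: across the 150 responses, every expected backend pod name must appear. A small sketch of that post-processing, assuming (hypothetically) the captured stdout is in a shell variable, using three of the pod names from the log:

```shell
#!/bin/sh
# Hypothetical excerpt of the probe loop's stdout (one backend name per line).
stdout="up-down-3-txdwc
up-down-3-s8wr5
up-down-3-qrm62
up-down-3-txdwc
up-down-3-s8wr5"

# Count distinct responders; the check passes when this equals the
# expected backend count (3 for service up-down-3 in this log).
distinct=$(printf '%s\n' "$stdout" | sort -u | grep -c .)
echo "$distinct"
```

This is only the counting idea; the e2e framework additionally verifies that the specific expected pod names are the ones that answered, not just that three distinct strings came back.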
May 20 23:25:12.466: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.61.248:80 2>&1 || true; echo; done" in pod services-2834/verify-service-up-exec-pod-qwjt4
May 20 23:25:12.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2834 exec verify-service-up-exec-pod-qwjt4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.61.248:80 2>&1 || true; echo; done'
May 20 23:25:12.834: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.61.248:80\n+ echo\n" [stderr elided: the "+ wget" / "+ echo" pair repeats identically for all 150 iterations]
May 20 23:25:12.834: INFO: stdout: "up-down-3-s8wr5\nup-down-3-s8wr5\nup-down-3-txdwc\nup-down-3-qrm62\n..." [150 hostname lines elided; all responses came from the three backends up-down-3-s8wr5, up-down-3-txdwc, and up-down-3-qrm62]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2834
STEP: Deleting pod verify-service-up-exec-pod-qwjt4 in namespace services-2834
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:25:12.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2834" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:126.777 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":309,"failed":0}
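The reachability check logged above (150 `wget` probes from an exec pod, then counting which backends answered) can be sketched as a small shell loop. This is a minimal sketch, not the e2e framework's code: `fetch_backend` is a deterministic stub standing in for the in-pod `wget -q -T 1 -O - http://<clusterIP>:80`, and the hostnames are taken from the log.

```shell
# Stub standing in for `wget -q -T 1 -O - http://<clusterIP>:80`;
# the real test runs this inside a host-exec pod against the cluster IP.
# Here we rotate deterministically through the three replica hostnames,
# mimicking kube-proxy spreading requests across the service's endpoints.
i=0
fetch_backend() {
  i=$(( (i + 1) % 3 ))
  case $i in
    0) echo up-down-3-s8wr5 ;;
    1) echo up-down-3-txdwc ;;
    2) echo up-down-3-qrm62 ;;
  esac
}

# The e2e test issues 150 requests and keeps going on failures (`|| true`),
# collecting each responding pod's hostname on its own line.
responses=$(for n in $(seq 1 150); do fetch_backend || true; done)

# The service counts as "up" with 3 backends when all replica names appear.
unique=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "distinct backends: $unique"
```

With a real service the distribution is not round-robin-exact, which is why the test sends 150 probes: enough that every healthy endpoint is overwhelmingly likely to answer at least once.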
May 20 23:25:12.863: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:26.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-2453
STEP: creating service service-proxy-disabled in namespace services-2453
STEP: creating replication controller service-proxy-disabled in namespace services-2453
I0520 23:24:26.750848      26 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-2453, replica count: 3
I0520 23:24:29.802166      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:32.804382      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:35.805217      26 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-2453
STEP: creating service service-proxy-toggled in namespace services-2453
STEP: creating replication controller service-proxy-toggled in namespace services-2453
I0520 23:24:35.820714      26 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-2453, replica count: 3
I0520 23:24:38.872117      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:41.873838      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:44.875021      26 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May 20 23:24:44.878: INFO: Creating new host exec pod
May 20 23:24:44.893: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:46.897: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:48.898: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:24:48.899: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:24:52.920: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done" in pod services-2453/verify-service-up-host-exec-pod
May 20 23:24:52.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done'
May 20 23:24:53.280: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.228:80\n+ echo\n" [stderr elided: the "+ wget" / "+ echo" pair repeats identically for all 150 iterations]
May 20 23:24:53.281: INFO: stdout: "service-proxy-toggled-zg2kg\nservice-proxy-toggled-bch5g\nservice-proxy-toggled-x4qjm\n..." [150 hostname lines elided; all responses came from the three backends service-proxy-toggled-bch5g, service-proxy-toggled-x4qjm, and service-proxy-toggled-zg2kg]
May 20 23:24:53.281: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done" in pod services-2453/verify-service-up-exec-pod-6sm2r
May 20 23:24:53.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-up-exec-pod-6sm2r -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done'
May 20 23:24:53.695: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.228:80\n+ echo\n" [stderr elided: the "+ wget" / "+ echo" pair repeats identically for all 150 iterations]
May 20 23:24:53.696: INFO: stdout: "service-proxy-toggled-x4qjm\nservice-proxy-toggled-bch5g\nservice-proxy-toggled-zg2kg\n..." [150 hostname lines elided; all responses came from the three backends service-proxy-toggled-bch5g, service-proxy-toggled-x4qjm, and service-proxy-toggled-zg2kg]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2453
STEP: Deleting pod verify-service-up-exec-pod-6sm2r in namespace services-2453
STEP: verifying service-disabled is not up
May 20 23:24:53.707: INFO: Creating new host exec pod
May 20 23:24:53.718: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:24:55.723: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:24:55.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed'
May 20 23:24:58.024: INFO: rc: 28
May 20 23:24:58.024: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed" in pod services-2453/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.43.136:80
command terminated with exit code 28

error:
exit status 28
Output: 
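The down-check above treats curl exit code 28 (connection timeout) as the expected outcome: no response means kube-proxy has stopped routing traffic to the ClusterIP, while any answer would print `service-down-failed`. A minimal sketch of that interpretation (the helper name and messages are illustrative, not part of the e2e framework):

```shell
# Map curl's exit status onto the e2e test's pass/fail semantics:
# 0  -> the service answered, so the "service is down" check failed;
# 28 -> the connection timed out, i.e. the service is unreachable as expected.
interpret_curl_rc() {
  case "$1" in
    0)  echo "service-down-failed" ;;
    28) echo "service unreachable (timeout), as expected" ;;
    *)  echo "unexpected curl failure (rc=$1)" ;;
  esac
}

interpret_curl_rc 28
```

In the log above, `rc: 28` therefore counts as a pass for the disabled service.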
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2453
STEP: adding service-proxy-name label
STEP: verifying service is not up
May 20 23:24:58.039: INFO: Creating new host exec pod
May 20 23:24:58.051: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:00.056: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:02.057: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:04.056: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:06.056: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:08.056: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:10.055: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:12.055: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:14.056: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:25:14.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.228:80 && echo service-down-failed'
May 20 23:25:16.315: INFO: rc: 28
May 20 23:25:16.316: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.59.228:80 && echo service-down-failed" in pod services-2453/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.228:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.59.228:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2453
STEP: removing service-proxy-name label
STEP: verifying service is up
May 20 23:25:16.327: INFO: Creating new host exec pod
May 20 23:25:16.342: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:18.346: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:20.346: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May 20 23:25:20.346: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May 20 23:25:24.366: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done" in pod services-2453/verify-service-up-host-exec-pod
May 20 23:25:24.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done'
May 20 23:25:24.766: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.59.228:80\n+ echo\n"
May 20 23:25:24.766: INFO: stdout: 150 responses, spread across all three backends: service-proxy-toggled-bch5g, service-proxy-toggled-x4qjm, service-proxy-toggled-zg2kg
May 20 23:25:24.766: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done" in pod services-2453/verify-service-up-exec-pod-t45sz
May 20 23:25:24.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-up-exec-pod-t45sz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.228:80 2>&1 || true; echo; done'
May 20 23:25:25.109: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.59.228:80\n+ echo\n"
May 20 23:25:25.110: INFO: stdout: 150 responses, spread across all three backends: service-proxy-toggled-bch5g, service-proxy-toggled-x4qjm, service-proxy-toggled-zg2kg
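The "service has 3 reachable backends" step reduces to counting the distinct pod names among the 150 wget responses. A small sketch of that reduction (the sample responses are taken from the log above; the real test also checks each name against the expected endpoint set):

```shell
# Count the distinct backend pods that answered; the e2e check
# requires all 3 replicas to appear in the response set.
responses="service-proxy-toggled-bch5g
service-proxy-toggled-zg2kg
service-proxy-toggled-x4qjm
service-proxy-toggled-bch5g
service-proxy-toggled-x4qjm"

unique_backends=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
echo "$unique_backends"   # -> 3
```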
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2453
STEP: Deleting pod verify-service-up-exec-pod-t45sz in namespace services-2453
STEP: verifying service-disabled is still not up
May 20 23:25:25.125: INFO: Creating new host exec pod
May 20 23:25:25.136: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:27.139: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May 20 23:25:29.143: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May 20 23:25:29.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed'
May 20 23:25:31.410: INFO: rc: 28
May 20 23:25:31.410: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed" in pod services-2453/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2453 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.136:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.43.136:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2453
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:25:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2453" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:64.706 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":1088,"failed":0}
May 20 23:25:31.429: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:24:13.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-8999
May 20 23:24:13.938: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-8999
I0520 23:24:13.949597      37 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-8999, replica count: 2
I0520 23:24:17.004032      37 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:20.004135      37 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:23.009090      37 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0520 23:24:26.011660      37 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May 20 23:24:26.011: INFO: Creating new exec pod
May 20 23:24:33.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
May 20 23:24:33.289: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
May 20 23:24:33.289: INFO: stdout: "nodeport-update-service-lrn9m"
May 20 23:24:33.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.48.53 80'
May 20 23:24:33.592: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.48.53 80\nConnection to 10.233.48.53 80 port [tcp/http] succeeded!\n"
May 20 23:24:33.592: INFO: stdout: "nodeport-update-service-lrn9m"
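A sketch of the backend-identity probe the two successful checks above perform (addresses from the log; the helper name is hypothetical): pipe one line into nc so the TCP connection opens and closes cleanly, then read back the serving pod's hostname, which the test's backend pods return on port 80.

```shell
probe_backend() {
  # $1: service DNS name or ClusterIP   $2: port
  echo hostName | nc -v -t -w 2 "$1" "$2"
}
```

Both `nodeport-update-service:80` and the ClusterIP `10.233.48.53:80` returned `nodeport-update-service-lrn9m`, so the ClusterIP path was already working before the NodePort check below began.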
May 20 23:24:33.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:24:33.987: INFO: rc: 1
May 20 23:24:33.987: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:24:34.988 - 23:25:15.233: INFO: [41 further identical attempts omitted: the same probe `nc -v -t -w 2 10.10.190.207 30907` was rerun roughly once per second, each ending in "nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused" (exit status 1) followed by "Retrying..."]
May 20 23:25:15.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:16.246: INFO: rc: 1
May 20 23:25:16.246: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:16.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:17.241: INFO: rc: 1
May 20 23:25:17.241: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:17.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:18.228: INFO: rc: 1
May 20 23:25:18.228: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:18.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:19.260: INFO: rc: 1
May 20 23:25:19.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:19.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:20.285: INFO: rc: 1
May 20 23:25:20.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:20.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:21.297: INFO: rc: 1
May 20 23:25:21.298: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:21.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:22.398: INFO: rc: 1
May 20 23:25:22.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30907
+ echo hostName
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:22.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:23.251: INFO: rc: 1
May 20 23:25:23.251: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:23.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:24.239: INFO: rc: 1
May 20 23:25:24.239: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:24.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:25.334: INFO: rc: 1
May 20 23:25:25.334: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:25.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:26.227: INFO: rc: 1
May 20 23:25:26.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:26.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:27.235: INFO: rc: 1
May 20 23:25:27.235: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:27.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:28.241: INFO: rc: 1
May 20 23:25:28.241: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:28.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:29.237: INFO: rc: 1
May 20 23:25:29.237: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:29.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:30.233: INFO: rc: 1
May 20 23:25:30.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:30.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:31.213: INFO: rc: 1
May 20 23:25:31.213: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:31.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:32.223: INFO: rc: 1
May 20 23:25:32.223: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:32.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:33.249: INFO: rc: 1
May 20 23:25:33.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:33.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:34.227: INFO: rc: 1
May 20 23:25:34.227: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:34.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:35.232: INFO: rc: 1
May 20 23:25:35.232: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:35.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:36.252: INFO: rc: 1
May 20 23:25:36.252: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:36.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:37.263: INFO: rc: 1
May 20 23:25:37.263: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:37.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:38.250: INFO: rc: 1
May 20 23:25:38.250: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:38.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:39.507: INFO: rc: 1
May 20 23:25:39.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:39.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:40.214: INFO: rc: 1
May 20 23:25:40.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:40.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:41.237: INFO: rc: 1
May 20 23:25:41.237: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:41.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:42.244: INFO: rc: 1
May 20 23:25:42.244: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:42.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:43.236: INFO: rc: 1
May 20 23:25:43.236: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:43.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:44.264: INFO: rc: 1
May 20 23:25:44.264: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:44.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:45.923: INFO: rc: 1
May 20 23:25:45.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:45.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:46.320: INFO: rc: 1
May 20 23:25:46.320: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:46.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:47.247: INFO: rc: 1
May 20 23:25:47.247: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:47.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:48.248: INFO: rc: 1
May 20 23:25:48.248: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:48.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:49.264: INFO: rc: 1
May 20 23:25:49.264: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:49.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:50.258: INFO: rc: 1
May 20 23:25:50.258: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:50.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:51.237: INFO: rc: 1
May 20 23:25:51.237: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:51.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:52.268: INFO: rc: 1
May 20 23:25:52.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:52.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:53.286: INFO: rc: 1
May 20 23:25:53.286: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:53.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:54.305: INFO: rc: 1
May 20 23:25:54.305: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:54.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:55.242: INFO: rc: 1
May 20 23:25:55.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:25:55.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:25:56.246: INFO: rc: 1
May 20 23:25:56.246: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the probe above was retried roughly once per second from 23:25:56.989 through 23:26:33.989 (≈38 attempts); every attempt returned rc: 1 with the same stderr ("nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused", exit status 1); identical retry blocks elided ...]
May 20 23:26:34.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907'
May 20 23:26:34.527: INFO: rc: 1
May 20 23:26:34.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8999 exec execpodpnrdf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30907:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30907
nc: connect to 10.10.190.207 port 30907 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 20 23:26:34.528: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ad7450>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30907 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30907 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001b00f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001b00f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001b00f00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
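The two minutes of "Retrying..." output above is a poll loop: the suite runs the nc probe roughly once per second until it connects or the 2m0s deadline expires, then surfaces the "service is not reachable within 2m0s timeout" error seen in the FAIL. A minimal shell sketch of that pattern (not the e2e framework's actual Go code; `probe_cmd` is a stand-in for the `kubectl exec ... nc -v -t -w 2 <ip> <port>` command from the log):

```shell
# Retry a probe command about once per second until it succeeds or a
# deadline (in seconds) passes — the shape of the loop driving the log above.
retry_until_deadline() {
  probe_cmd=$1   # stand-in for the kubectl/nc reachability probe
  deadline=$2    # e.g. 120 for the suite's 2m0s timeout
  elapsed=0
  while [ "$elapsed" -lt "$deadline" ]; do
    if $probe_cmd; then
      echo "reachable after ${elapsed}s"
      return 0
    fi
    echo "Retrying..."
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "service is not reachable within ${deadline}s timeout"
  return 1
}
```

In this run every probe exited 1 (Connection refused on 10.10.190.207:30907), so the loop exhausted its deadline and the test failed rather than ever reaching the success branch.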
May 20 23:26:34.529: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8999".
STEP: Found 17 events.
May 20 23:26:34.554: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodpnrdf: { } Scheduled: Successfully assigned services-8999/execpodpnrdf to node2
May 20 23:26:34.554: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-lrn9m: { } Scheduled: Successfully assigned services-8999/nodeport-update-service-lrn9m to node2
May 20 23:26:34.554: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-skqd5: { } Scheduled: Successfully assigned services-8999/nodeport-update-service-skqd5 to node2
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:13 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-lrn9m
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:13 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-skqd5
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:18 +0000 UTC - event for nodeport-update-service-lrn9m: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:18 +0000 UTC - event for nodeport-update-service-skqd5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:19 +0000 UTC - event for nodeport-update-service-lrn9m: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 697.303413ms
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:19 +0000 UTC - event for nodeport-update-service-lrn9m: {kubelet node2} Created: Created container nodeport-update-service
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:19 +0000 UTC - event for nodeport-update-service-skqd5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 971.983276ms
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:20 +0000 UTC - event for nodeport-update-service-skqd5: {kubelet node2} Created: Created container nodeport-update-service
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:21 +0000 UTC - event for nodeport-update-service-lrn9m: {kubelet node2} Started: Started container nodeport-update-service
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:22 +0000 UTC - event for nodeport-update-service-skqd5: {kubelet node2} Started: Started container nodeport-update-service
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:27 +0000 UTC - event for execpodpnrdf: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:28 +0000 UTC - event for execpodpnrdf: {kubelet node2} Started: Started container agnhost-container
May 20 23:26:34.554: INFO: At 2022-05-20 23:24:28 +0000 UTC - event for execpodpnrdf: {kubelet node2} Created: Created container agnhost-container
May 20 23:26:34.555: INFO: At 2022-05-20 23:24:28 +0000 UTC - event for execpodpnrdf: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 337.452257ms
May 20 23:26:34.558: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
May 20 23:26:34.558: INFO: execpodpnrdf                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:26 +0000 UTC  }]
May 20 23:26:34.558: INFO: nodeport-update-service-lrn9m  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:13 +0000 UTC  }]
May 20 23:26:34.558: INFO: nodeport-update-service-skqd5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-20 23:24:13 +0000 UTC  }]
May 20 23:26:34.558: INFO: 
May 20 23:26:34.563: INFO: 
Logging node info for node master1
May 20 23:26:34.567: INFO: Node Info: &Node{ObjectMeta:{master1    b016dcf2-74b7-4456-916a-8ca363b9ccc3 76773 0 2022-05-20 20:01:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-20 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-05-20 20:09:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-05-20 20:12:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:07 +0000 UTC,LastTransitionTime:2022-05-20 20:07:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:30 +0000 UTC,LastTransitionTime:2022-05-20 20:01:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:26:30 +0000 UTC,LastTransitionTime:2022-05-20 20:04:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e9847a94929d4465bdf672fd6e82b77d,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a01e5bd5-a73c-4ab6-b80a-cab509b05bc6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:26:34.568: INFO: 
Logging kubelet events for node master1
May 20 23:26:34.571: INFO: 
Logging pods the kubelet thinks are on node master1
May 20 23:26:34.600: INFO: kube-apiserver-master1 started at 2022-05-20 20:02:32 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:26:34.600: INFO: kube-controller-manager-master1 started at 2022-05-20 20:10:37 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-controller-manager ready: true, restart count 3
May 20 23:26:34.600: INFO: kube-proxy-rgxh2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:26:34.600: INFO: kube-flannel-tzq8g started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:26:34.600: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:26:34.600: INFO: node-feature-discovery-controller-cff799f9f-nq7tc started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container nfd-controller ready: true, restart count 0
May 20 23:26:34.600: INFO: node-exporter-4rvrg started at 2022-05-20 20:17:21 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:34.600: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:26:34.600: INFO: kube-scheduler-master1 started at 2022-05-20 20:20:27 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-scheduler ready: true, restart count 1
May 20 23:26:34.600: INFO: kube-multus-ds-amd64-k8cb6 started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:26:34.600: INFO: container-registry-65d7c44b96-n94w5 started at 2022-05-20 20:08:47 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container docker-registry ready: true, restart count 0
May 20 23:26:34.600: INFO: 	Container nginx ready: true, restart count 0
May 20 23:26:34.600: INFO: prometheus-operator-585ccfb458-bl62n started at 2022-05-20 20:17:13 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.600: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:34.600: INFO: 	Container prometheus-operator ready: true, restart count 0
May 20 23:26:34.702: INFO: 
Latency metrics for node master1
May 20 23:26:34.702: INFO: 
Logging node info for node master2
May 20 23:26:34.705: INFO: Node Info: &Node{ObjectMeta:{master2    ddc04b08-e43a-4e18-a612-aa3bf7f8411e 76770 0 2022-05-20 20:01:56 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:01:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:29 +0000 UTC,LastTransitionTime:2022-05-20 20:01:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:26:29 +0000 UTC,LastTransitionTime:2022-05-20 20:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:63d829bfe81540169bcb84ee465e884a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:fc4aead3-0f07-477a-9f91-3902c50ddf48,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:26:34.705: INFO: 
Logging kubelet events for node master2
May 20 23:26:34.707: INFO: 
Logging pods the kubelet thinks are on node master2
May 20 23:26:34.722: INFO: kube-scheduler-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-scheduler ready: true, restart count 3
May 20 23:26:34.722: INFO: kube-multus-ds-amd64-97fkc started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:26:34.722: INFO: kube-flannel-wj7hl started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:26:34.722: INFO: 	Container kube-flannel ready: true, restart count 1
May 20 23:26:34.722: INFO: coredns-8474476ff8-tjnfw started at 2022-05-20 20:04:46 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container coredns ready: true, restart count 1
May 20 23:26:34.722: INFO: dns-autoscaler-7df78bfcfb-5qj9t started at 2022-05-20 20:04:48 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container autoscaler ready: true, restart count 1
May 20 23:26:34.722: INFO: node-exporter-jfg4p started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:34.722: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:26:34.722: INFO: kube-apiserver-master2 started at 2022-05-20 20:02:34 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:26:34.722: INFO: kube-controller-manager-master2 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-controller-manager ready: true, restart count 2
May 20 23:26:34.722: INFO: kube-proxy-wfzg2 started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.722: INFO: 	Container kube-proxy ready: true, restart count 1
May 20 23:26:34.813: INFO: 
Latency metrics for node master2
May 20 23:26:34.813: INFO: 
Logging node info for node master3
May 20 23:26:34.817: INFO: Node Info: &Node{ObjectMeta:{master3    f42c1bd6-d828-4857-9180-56c73dcc370f 76767 0 2022-05-20 20:02:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-20 20:02:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-20 20:04:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-20 20:04:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-20 20:14:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:09 +0000 UTC,LastTransitionTime:2022-05-20 20:07:09 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:02:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:04:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6a2131d65a6f41c3b857ed7d5f7d9f9f,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fa6d1c6-058c-482a-97f3-d7e9e817b36a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:26:34.817: INFO: 
Logging kubelet events for node master3
May 20 23:26:34.820: INFO: 
Logging pods the kubelet thinks are on node master3
May 20 23:26:34.835: INFO: kube-flannel-bwb5w started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:26:34.835: INFO: 	Init container install-cni ready: true, restart count 0
May 20 23:26:34.835: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:26:34.835: INFO: kube-controller-manager-master3 started at 2022-05-20 20:10:36 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.835: INFO: 	Container kube-controller-manager ready: true, restart count 1
May 20 23:26:34.835: INFO: kube-scheduler-master3 started at 2022-05-20 20:02:33 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.835: INFO: 	Container kube-scheduler ready: true, restart count 2
May 20 23:26:34.835: INFO: kube-proxy-rsqzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.835: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:26:34.836: INFO: node-exporter-zgxkr started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.836: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:34.836: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:26:34.836: INFO: kube-apiserver-master3 started at 2022-05-20 20:02:05 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.836: INFO: 	Container kube-apiserver ready: true, restart count 0
May 20 23:26:34.836: INFO: kube-multus-ds-amd64-ch8bd started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.836: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:26:34.836: INFO: coredns-8474476ff8-4szxh started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.836: INFO: 	Container coredns ready: true, restart count 1
May 20 23:26:34.912: INFO: 
Latency metrics for node master3
May 20 23:26:34.912: INFO: 
Logging node info for node node1
May 20 23:26:34.915: INFO: Node Info: &Node{ObjectMeta:{node1    65c381dd-b6f5-4e67-a327-7a45366d15af 76768 0 2022-05-20 20:03:10 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-20 22:57:29 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 
0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 +0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:03:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:26:28 +0000 UTC,LastTransitionTime:2022-05-20 20:04:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f2f0a31e38e446cda6cf4c679d8a2ef5,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:c988afd2-8149-4515-9a6f-832552c2ed2d,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ 
:],SizeBytes:1003977757,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:26:34.916: INFO: 
Logging kubelet events for node node1
May 20 23:26:34.918: INFO: 
Logging pods the kubelet thinks are on node node1
May 20 23:26:34.936: INFO: nginx-proxy-node1 started at 2022-05-20 20:06:57 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:26:34.936: INFO: prometheus-k8s-0 started at 2022-05-20 20:17:30 +0000 UTC (0+4 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container config-reloader ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container grafana ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container prometheus ready: true, restart count 1
May 20 23:26:34.936: INFO: collectd-875j8 started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container collectd ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:26:34.936: INFO: node-feature-discovery-worker-rh55h started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:26:34.936: INFO: cmk-init-discover-node1-vkzkd started at 2022-05-20 20:15:33 +0000 UTC (0+3 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container discover ready: false, restart count 0
May 20 23:26:34.936: INFO: 	Container init ready: false, restart count 0
May 20 23:26:34.936: INFO: 	Container install ready: false, restart count 0
May 20 23:26:34.936: INFO: kube-flannel-2blt7 started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:26:34.936: INFO: 	Init container install-cni ready: true, restart count 2
May 20 23:26:34.936: INFO: 	Container kube-flannel ready: true, restart count 3
May 20 23:26:34.936: INFO: kube-proxy-v8kzq started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:26:34.936: INFO: cmk-c5x47 started at 2022-05-20 20:16:15 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:26:34.936: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:26:34.936: INFO: kube-multus-ds-amd64-krd6m started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.936: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:26:34.936: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.937: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May 20 23:26:34.937: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:34.937: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:26:34.937: INFO: node-exporter-czwvh started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:34.937: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:34.937: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:26:35.135: INFO: 
Latency metrics for node node1
May 20 23:26:35.135: INFO: 
Logging node info for node node2
May 20 23:26:35.138: INFO: Node Info: &Node{ObjectMeta:{node2    a0e0a426-876d-4419-96e4-c6977ef3393c 76782 0 2022-05-20 20:03:09 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-20 20:03:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-20 
20:03:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-20 20:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-20 20:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-20 20:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-05-20 22:31:06 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-05-20 22:31:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-20 20:07:03 
+0000 UTC,LastTransitionTime:2022-05-20 20:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:34 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:34 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-20 23:26:34 +0000 UTC,LastTransitionTime:2022-05-20 20:03:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-20 23:26:34 +0000 UTC,LastTransitionTime:2022-05-20 20:07:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a6deb87c5d6d4ca89be50c8f447a0e3c,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:67af2183-25fe-4024-95ea-e80edf7c8695,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[localhost:30500/cmk@sha256:1b6fdb10d02a95904d28fbec7317b3044b913b4572405caf5a5b4f305481ce37 
localhost:30500/cmk:v1.5.1],SizeBytes:727687197,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bcea5fd975bec7f8eb179f896b3a007090d081bd13d974bdb01eedd94cdd88b1 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:f65735add9b770eec74999948d1a43963106c14a89579d0158e1ec3a1bae070e localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 20 23:26:35.139: INFO: 
Logging kubelet events for node node2
May 20 23:26:35.142: INFO: 
Logging pods the kubelet thinks are on node node2
May 20 23:26:35.159: INFO: node-feature-discovery-worker-nphk9 started at 2022-05-20 20:11:58 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container nfd-worker ready: true, restart count 0
May 20 23:26:35.159: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk started at 2022-05-20 20:13:08 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container kube-sriovdp ready: true, restart count 0
May 20 23:26:35.159: INFO: cmk-9hxtl started at 2022-05-20 20:16:16 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container nodereport ready: true, restart count 0
May 20 23:26:35.159: INFO: 	Container reconcile ready: true, restart count 0
May 20 23:26:35.159: INFO: node-exporter-vm24n started at 2022-05-20 20:17:20 +0000 UTC (0+2 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May 20 23:26:35.159: INFO: 	Container node-exporter ready: true, restart count 0
May 20 23:26:35.159: INFO: nodeport-update-service-lrn9m started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:26:35.159: INFO: execpodpnrdf started at 2022-05-20 23:24:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container agnhost-container ready: true, restart count 0
May 20 23:26:35.159: INFO: cmk-webhook-6c9d5f8578-5kbbc started at 2022-05-20 20:16:16 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container cmk-webhook ready: true, restart count 0
May 20 23:26:35.159: INFO: kube-multus-ds-amd64-p22zp started at 2022-05-20 20:04:18 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container kube-multus ready: true, restart count 1
May 20 23:26:35.159: INFO: kubernetes-metrics-scraper-5558854cb-66r9g started at 2022-05-20 20:04:50 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May 20 23:26:35.159: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd started at 2022-05-20 20:20:26 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container tas-extender ready: true, restart count 0
May 20 23:26:35.159: INFO: cmk-init-discover-node2-b7gw4 started at 2022-05-20 20:15:53 +0000 UTC (0+3 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container discover ready: false, restart count 0
May 20 23:26:35.159: INFO: 	Container init ready: false, restart count 0
May 20 23:26:35.159: INFO: 	Container install ready: false, restart count 0
May 20 23:26:35.159: INFO: collectd-h4pzk started at 2022-05-20 20:21:17 +0000 UTC (0+3 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container collectd ready: true, restart count 0
May 20 23:26:35.159: INFO: 	Container collectd-exporter ready: true, restart count 0
May 20 23:26:35.159: INFO: 	Container rbac-proxy ready: true, restart count 0
May 20 23:26:35.159: INFO: nodeport-update-service-skqd5 started at 2022-05-20 23:24:13 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container nodeport-update-service ready: true, restart count 0
May 20 23:26:35.159: INFO: nginx-proxy-node2 started at 2022-05-20 20:03:09 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container nginx-proxy ready: true, restart count 2
May 20 23:26:35.159: INFO: kube-proxy-rg2fp started at 2022-05-20 20:03:14 +0000 UTC (0+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Container kube-proxy ready: true, restart count 2
May 20 23:26:35.159: INFO: kube-flannel-jpmpd started at 2022-05-20 20:04:10 +0000 UTC (1+1 container statuses recorded)
May 20 23:26:35.159: INFO: 	Init container install-cni ready: true, restart count 1
May 20 23:26:35.159: INFO: 	Container kube-flannel ready: true, restart count 2
May 20 23:26:35.362: INFO: 
Latency metrics for node node2
May 20 23:26:35.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8999" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [141.461 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  May 20 23:26:34.528: Unexpected error:
      <*errors.errorString | 0xc003ad7450>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30907 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30907 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":3,"skipped":666,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
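The failure above means the e2e framework polled the NodePort endpoint 10.10.190.207:30907 for two minutes without ever completing a TCP connection. A minimal sketch of that kind of reachability poll (the host, port, and 120-second budget are taken from the log; the helper name and intervals are illustrative, not the framework's actual implementation):

```python
import socket
import time

def wait_for_tcp(host: str, port: int, timeout: float = 120.0,
                 interval: float = 1.0) -> bool:
    """Poll a TCP endpoint; return True once a connect succeeds,
    False if the time budget expires (mirrors the log's "2m0s timeout")."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Short per-attempt connect timeout so one hung SYN
            # doesn't consume the whole budget.
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. wait_for_tcp("10.10.190.207", 30907) — endpoint from the failing spec
```

A persistent False here, while the `nodeport-update-service` pods show ready in the kubelet listing above, typically points at the node-level dataplane (kube-proxy rules, conntrack entries) rather than the pods themselves.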
May 20 23:26:35.379: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":-1,"completed":0,"skipped":284,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for node-Service: http"]}
May 20 23:24:40.443: INFO: Running AfterSuite actions on all nodes
May 20 23:26:35.410: INFO: Running AfterSuite actions on node 1
May 20 23:26:35.410: INFO: Skipping dumping logs from cluster



Summarizing 3 Failures:

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for node-Service: http 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 27 of 5773 Specs in 240.546 seconds
FAIL! -- 24 Passed | 3 Failed | 0 Pending | 5746 Skipped


Ginkgo ran 1 suite in 4m2.249327133s
Test Suite Failed