Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651879673 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May  6 23:27:55.351: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.353: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May  6 23:27:55.382: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May  6 23:27:55.448: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting
May  6 23:27:55.448: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting
May  6 23:27:55.448: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May  6 23:27:55.448: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May  6 23:27:55.448: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May  6 23:27:55.466: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May  6 23:27:55.466: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May  6 23:27:55.466: INFO: e2e test version: v1.21.9
May  6 23:27:55.467: INFO: kube-apiserver version: v1.21.1
May  6 23:27:55.468: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.475: INFO: Cluster IP family: ipv4
May  6 23:27:55.472: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.494: INFO: Cluster IP family: ipv4
May  6 23:27:55.474: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.494: INFO: Cluster IP family: ipv4
May  6 23:27:55.480: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.503: INFO: Cluster IP family: ipv4
May  6 23:27:55.492: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.515: INFO: Cluster IP family: ipv4
May  6 23:27:55.504: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.526: INFO: Cluster IP family: ipv4
May  6 23:27:55.505: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.527: INFO: Cluster IP family: ipv4
May  6 23:27:55.505: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.526: INFO: Cluster IP family: ipv4
May  6 23:27:55.516: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.537: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May  6 23:27:55.567: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:27:55.588: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0506 23:27:55.585715      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.585: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.589: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May  6 23:27:55.591: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:27:55.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3142" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0506 23:27:55.792435      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.792: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.794: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May  6 23:27:55.796: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:27:55.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4764" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:56.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W0506 23:27:56.350591      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:56.350: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:56.352: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
May  6 23:27:57.588: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:56.223263      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:56.223: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:56.225: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-5525
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:56.368: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:56.399: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:58.403: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:00.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:02.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:04.403: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:06.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:08.406: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:10.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:12.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:14.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:16.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:18.403: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:18.409: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:22.432: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:22.432: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:22.440: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:22.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5525" for this suite.


S [SKIPPING] [26.250 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:22.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May  6 23:28:22.741: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:22.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-4647" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:55.623547      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.623: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.627: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-9656
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:55.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:55.768: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:57.774: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:59.773: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:01.773: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:03.771: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:05.772: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:07.773: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:09.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:11.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:13.773: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:15.774: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:15.778: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:25.815: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:25.815: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:25.821: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:25.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9656" for this suite.


S [SKIPPING] [30.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:55.728900      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.729: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.731: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-4509
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:55.846: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:55.879: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:57.884: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:59.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:01.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:03.883: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:05.883: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:07.884: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:09.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:11.884: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:13.883: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:15.884: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:17.884: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:17.889: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:25.912: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:25.912: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:25.919: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:25.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4509" for this suite.


S [SKIPPING] [30.221 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:55.815594      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.815: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.817: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-1345
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:55.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:55.985: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:57.990: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:59.992: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:01.990: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:03.989: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:05.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:07.990: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:09.989: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:11.989: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:13.989: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:15.989: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:15.995: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:28:17.999: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:28.022: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:28.022: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:28.028: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:28.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1345" for this suite.


S [SKIPPING] [32.246 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:28.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-2935
STEP: changing service nodeport-range-test to out-of-range NodePort 62093
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 62093
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:28.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2935" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":107,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:56.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:56.075706      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:56.076: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:56.078: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-3834
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:56.212: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:56.245: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:58.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:00.253: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:02.251: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:04.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:06.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:08.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:10.249: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:12.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:14.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:16.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:18.249: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:18.254: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:28.342: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:28.342: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:28.348: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:28.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3834" for this suite.


S [SKIPPING] [32.307 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0506 23:27:55.669306      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.669: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.671: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-7523
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:55.787: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:55.818: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:57.821: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:59.825: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:01.822: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:03.822: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:05.822: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:07.827: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:09.822: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:11.826: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:13.820: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:15.823: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:15.827: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:25.845: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:25.845: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May  6 23:28:25.868: INFO: Service node-port-service in namespace nettest-7523 found.
May  6 23:28:25.884: INFO: Service session-affinity-service in namespace nettest-7523 found.
STEP: Waiting for NodePort service to expose endpoint
May  6 23:28:26.886: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May  6 23:28:27.890: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.233.8.111:90 (config.clusterIP)
May  6 23:28:27.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.31:9080/dial?request=echo%20noooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.233.8.111&port=90&tries=1'] Namespace:nettest-7523 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:28:27.896: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:28:28.796: INFO: Waiting for responses: map[]
May  6 23:28:28.796: INFO: reached 10.233.8.111 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:28.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7523" for this suite.


• [SLOW TEST:33.161 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":1,"skipped":58,"failed":0}
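A note on the probe above: the test pod curls the netserver's /dial endpoint, which relays the payload to the target over the requested protocol. A minimal sketch of how that URL is shaped, reconstructed from the logged request (the real test assembles it in Go inside the e2e framework; `build_dial_url` and its parameter names are illustrative, not framework API):

```shell
# Sketch (assumed helper): shape of the e2e "/dial" probe URL seen in the log.
# probe_pod_ip is the test-container-pod's IP; the netserver listening on
# :9080 relays the payload to host:port over the chosen protocol.
build_dial_url() {
  probe_pod_ip=$1; payload=$2; proto=$3; host=$4; port=$5
  printf 'http://%s:9080/dial?request=%s&protocol=%s&host=%s&port=%s&tries=1\n' \
    "$probe_pod_ip" "$payload" "$proto" "$host" "$port"
}

build_dial_url 10.244.4.31 hello udp 10.233.8.111 90
# → http://10.244.4.31:9080/dial?request=hello&protocol=udp&host=10.233.8.111&port=90&tries=1
```

The `request=echo%20nooo…` payload in the log is the test's oversized URL-encoded message, used to verify UDP handling of large datagrams.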

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-3055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:55.975: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:56.007: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:58.011: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:00.010: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:02.014: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:04.010: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:06.014: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:08.011: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:10.013: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:12.011: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:14.009: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:16.009: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:18.012: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:20.012: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:22.014: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:24.009: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:26.010: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:26.015: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:36.055: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:36.055: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:36.061: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:36.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3055" for this suite.


S [SKIPPING] [40.246 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:36.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
May  6 23:28:36.142: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:36.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-3189" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:56.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-5247
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:27:56.465: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:27:56.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:58.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:00.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:02.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:04.529: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:06.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:08.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:10.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:12.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:14.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:16.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:18.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:20.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:22.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:24.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:26.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:28.499: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:28.505: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:42.543: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:42.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:42.551: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:42.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5247" for this suite.


S [SKIPPING] [46.237 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:28.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-251
May  6 23:28:28.456: INFO: hairpin-test cluster ip: 10.233.16.230
STEP: creating a client/server pod
May  6 23:28:28.470: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:30.474: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:32.477: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:34.475: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:36.473: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:38.474: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:40.474: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-251 to expose endpoints map[hairpin:[8080]]
May  6 23:28:40.483: INFO: successfully validated that service hairpin-test in namespace services-251 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
May  6 23:28:41.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-251 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
May  6 23:28:42.733: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
May  6 23:28:42.733: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
May  6 23:28:42.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-251 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.230 8080'
May  6 23:28:43.111: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.16.230 8080\nConnection to 10.233.16.230 8080 port [tcp/http-alt] succeeded!\n"
May  6 23:28:43.111: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:43.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-251" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:14.693 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":258,"failed":0}
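The hairpin test above reduces to one `kubectl exec`: the pod pipes a string through `nc` to its own Service name (and then to the ClusterIP directly). A sketch reconstructing that command line from the log; the namespace, service name, and port are the ones from this particular run and change every run:

```shell
# Reconstructed from the log: the "hairpin" pod connects back to itself
# through its own ClusterIP service. Values are from this specific run.
ns=services-251
svc=hairpin-test
port=8080
hairpin_cmd="kubectl --namespace=$ns exec hairpin -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 $svc $port'"
echo "$hairpin_cmd"
```

The `400 Bad Request` in stdout is expected: `nc` sends a bare `hostName` line to an HTTP server, which rejects it. The test only needs the TCP connection itself to succeed, which `nc`'s "succeeded!" line in stderr confirms.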

SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:36.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
May  6 23:28:36.576: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-7908" to be "Succeeded or Failed"
May  6 23:28:36.578: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.98468ms
May  6 23:28:38.583: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006139631s
May  6 23:28:40.585: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008349529s
May  6 23:28:42.588: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011179211s
May  6 23:28:44.593: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01690524s
STEP: Saw pod success
May  6 23:28:44.593: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:44.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7908" for this suite.


• [SLOW TEST:8.175 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":251,"failed":0}
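The connectivity-test pod above simply attempts an outbound connection to 8.8.8.8 and exits 0 on success ("Succeeded or Failed" gating). An illustrative stand-in, not the actual pod image or its flags, which belong to the e2e framework:

```shell
# Illustrative stand-in for the connectivity-test pod: open an outbound TCP
# connection to a well-known public resolver and report the result.
# /dev/tcp is a bash feature, hence the explicit bash -c.
check_internet() {
  if timeout 3 bash -c 'exec 3<>/dev/tcp/8.8.8.8/53' 2>/dev/null; then
    echo OK
  else
    echo FAIL
  fi
}
check_internet
```

In the real test the pod's exit status, not an echoed string, is what drives the Succeeded phase the log reports.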

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:42.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-184.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-184.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-184.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-184.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-184.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  6 23:28:52.753: INFO: DNS probes using dns-184/dns-test-d6c0d0f5-812e-4bf0-b35b-0473c45b2bb4 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:52.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-184" for this suite.


• [SLOW TEST:10.113 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
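The long one-line loops the DNS test injects into its "wheezy" and "jessie" prober containers are easier to read one probe at a time. A sketch of the per-name probe, reconstructed from the logged commands (`probe_cmd` is an illustrative helper; the UDP variant is shown, and `+tcp` replaces `+notcp` for the TCP one; the doubled `$$` in the log is assumed to collapse to a single `$` by the time the container's shell runs it):

```shell
# Readable reconstruction of one DNS probe from the loops above: an A-record
# lookup via the pod's search path whose success drops an OK marker file
# that the test later collects from the prober pod's /results volume.
probe_cmd() {
  name=$1; jig=$2
  printf 'check="$(dig +notcp +noall +answer +search %s A)" && test -n "$check" && echo OK > /results/%s_udp@%s\n' \
    "$name" "$jig" "$name"
}

probe_cmd kubernetes.default wheezy
```

The test then polls for the expected marker files; "DNS probes ... succeeded" in the log means every expected marker appeared.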
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:26.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-9294
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:26.111: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:26.144: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:28.147: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:30.148: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:32.148: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:34.147: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:36.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:38.149: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:40.150: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:42.150: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:44.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:46.147: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:48.149: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:48.154: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:28:56.178: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:28:56.178: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:56.185: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:56.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9294" for this suite.


S [SKIPPING] [30.191 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:29.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
May  6 23:28:29.088: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:31.092: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:33.092: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:35.092: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:37.093: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:39.091: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:41.092: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
May  6 23:28:41.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4939 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
May  6 23:28:41.448: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
May  6 23:28:41.448: INFO: stdout: "iptables"
May  6 23:28:41.448: INFO: proxyMode: iptables
May  6 23:28:41.458: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May  6 23:28:41.460: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-4939
May  6 23:28:41.466: INFO: sourceip-test cluster ip: 10.233.0.86
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
May  6 23:28:41.486: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:43.490: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:45.490: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:47.494: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-4939 to expose endpoints map[echo-sourceip:[8080]]
May  6 23:28:47.503: INFO: successfully validated that service sourceip-test in namespace services-4939 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
May  6 23:28:47.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May  6 23:28:49.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-667db89d86\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 23:28:51.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-667db89d86\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 23:28:53.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-667db89d86\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 23:28:55.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476533, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476527, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-667db89d86\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 23:28:57.520: INFO: Waiting up to 2m0s to get response from 10.233.0.86:8080
May  6 23:28:57.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4939 exec pause-pod-667db89d86-9dwc8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.0.86:8080/clientip'
May  6 23:28:57.904: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.0.86:8080/clientip\n"
May  6 23:28:57.904: INFO: stdout: "10.244.3.189:50048"
STEP: Verifying the preserved source ip
May  6 23:28:57.904: INFO: Waiting up to 2m0s to get response from 10.233.0.86:8080
May  6 23:28:57.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4939 exec pause-pod-667db89d86-ppwp8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.0.86:8080/clientip'
May  6 23:28:58.162: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.0.86:8080/clientip\n"
May  6 23:28:58.162: INFO: stdout: "10.244.4.43:36122"
STEP: Verifying the preserved source ip
May  6 23:28:58.162: INFO: Deleting deployment
May  6 23:28:58.167: INFO: Cleaning up the echo server pod
May  6 23:28:58.175: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:28:58.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4939" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:29.142 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":194,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:28.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-1123
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:28.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:28.783: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:30.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:32.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:34.789: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:36.787: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:38.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:40.787: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:42.788: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:44.788: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:46.787: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:48.787: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:50.787: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:50.792: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:28:52.796: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:00.831: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:00.831: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:00.838: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:00.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1123" for this suite.


S [SKIPPING] [32.218 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:26.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-7687
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:26.547: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:26.578: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:28.584: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:30.589: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:32.583: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:34.582: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:36.583: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:38.584: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:40.582: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:42.583: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:44.629: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:46.583: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:28:48.823: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:28:48.827: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:28:50.830: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:00.867: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:00.867: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:00.874: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:00.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7687" for this suite.


S [SKIPPING] [34.446 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:00.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-4942
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-4942
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:00.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4942" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•S
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":385,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:01.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May  6 23:29:01.096: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May  6 23:29:01.099: INFO: starting watch
STEP: patching
STEP: updating
May  6 23:29:01.107: INFO: waiting for watch events with expected annotations
May  6 23:29:01.107: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May  6 23:29:01.107: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:01.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-2052" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":443,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:01.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
May  6 23:29:01.232: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
May  6 23:29:01.337: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:01.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1656" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.144 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:22.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
May  6 23:28:22.960: INFO: deploying iperf2 server
May  6 23:28:22.964: INFO: Waiting for deployment "iperf2-server-deployment" to complete
May  6 23:28:22.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
May  6 23:28:24.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476502, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476502, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787476502, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  6 23:28:26.985: INFO: waiting for iperf2 server endpoints
May  6 23:28:28.988: INFO: found iperf2 server endpoints
May  6 23:28:28.988: INFO: waiting for client pods to be running
May  6 23:28:36.993: INFO: all client pods are ready: 2 pods
May  6 23:28:36.995: INFO: server pod phase Running
May  6 23:28:36.995: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 23:28:22 +0000 UTC Reason: Message:}
May  6 23:28:36.995: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 23:28:26 +0000 UTC Reason: Message:}
May  6 23:28:36.995: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 23:28:26 +0000 UTC Reason: Message:}
May  6 23:28:36.995: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 23:28:22 +0000 UTC Reason: Message:}
May  6 23:28:36.995: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-05-06 23:28:24 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://9c74d394b128ba8d0ed1c95aa8af66be9d543c046a63838f39bac2b523470b3f Started:0xc0047ac06c}
May  6 23:28:36.996: INFO: found 2 matching client pods
May  6 23:28:36.998: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5717 PodName:iperf2-clients-pb29z ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:28:36.998: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:28:37.338: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May  6 23:28:37.338: INFO: iperf version: 
May  6 23:28:37.338: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-pb29z (node node2)
May  6 23:28:37.341: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5717 PodName:iperf2-clients-pb29z ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:28:37.341: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:28:52.475: INFO: Exec stderr: ""
May  6 23:28:52.475: INFO: output from exec on client pod iperf2-clients-pb29z (node node2): 
20220506232838.439,10.244.4.37,33418,10.233.59.181,6789,3,0.0-1.0,119930880,959447040
20220506232839.448,10.244.4.37,33418,10.233.59.181,6789,3,1.0-2.0,118489088,947912704
20220506232840.436,10.244.4.37,33418,10.233.59.181,6789,3,2.0-3.0,117833728,942669824
20220506232841.428,10.244.4.37,33418,10.233.59.181,6789,3,3.0-4.0,116916224,935329792
20220506232842.437,10.244.4.37,33418,10.233.59.181,6789,3,4.0-5.0,113639424,909115392
20220506232843.445,10.244.4.37,33418,10.233.59.181,6789,3,5.0-6.0,110886912,887095296
20220506232844.453,10.244.4.37,33418,10.233.59.181,6789,3,6.0-7.0,114688000,917504000
20220506232845.442,10.244.4.37,33418,10.233.59.181,6789,3,7.0-8.0,117047296,936378368
20220506232846.432,10.244.4.37,33418,10.233.59.181,6789,3,8.0-9.0,116654080,933232640
20220506232847.440,10.244.4.37,33418,10.233.59.181,6789,3,9.0-10.0,117964800,943718400
20220506232847.440,10.244.4.37,33418,10.233.59.181,6789,3,0.0-10.0,1164050432,930700818

May  6 23:28:52.478: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5717 PodName:iperf2-clients-ph5h8 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:28:52.478: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:28:52.611: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
May  6 23:28:52.611: INFO: iperf version: 
May  6 23:28:52.611: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-ph5h8 (node node1)
May  6 23:28:52.614: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5717 PodName:iperf2-clients-ph5h8 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:28:52.614: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:29:07.781: INFO: Exec stderr: ""
May  6 23:29:07.781: INFO: output from exec on client pod iperf2-clients-ph5h8 (node node1): 
20220506232853.762,10.244.3.182,52262,10.233.59.181,6789,3,0.0-1.0,3405774848,27246198784
20220506232854.750,10.244.3.182,52262,10.233.59.181,6789,3,1.0-2.0,3459907584,27679260672
20220506232855.757,10.244.3.182,52262,10.233.59.181,6789,3,2.0-3.0,3449421824,27595374592
20220506232856.764,10.244.3.182,52262,10.233.59.181,6789,3,3.0-4.0,3340107776,26720862208
20220506232857.752,10.244.3.182,52262,10.233.59.181,6789,3,4.0-5.0,3418357760,27346862080
20220506232858.759,10.244.3.182,52262,10.233.59.181,6789,3,5.0-6.0,3365404672,26923237376
20220506232859.746,10.244.3.182,52262,10.233.59.181,6789,3,6.0-7.0,3461611520,27692892160
20220506232900.753,10.244.3.182,52262,10.233.59.181,6789,3,7.0-8.0,3473014784,27784118272
20220506232901.760,10.244.3.182,52262,10.233.59.181,6789,3,8.0-9.0,3337224192,26697793536
20220506232902.747,10.244.3.182,52262,10.233.59.181,6789,3,9.0-10.0,3372613632,26980909056
20220506232902.747,10.244.3.182,52262,10.233.59.181,6789,3,0.0-10.0,34083438592,27266682706

May  6 23:29:07.781: INFO:                                From                                 To    Bandwidth (MB/s)
May  6 23:29:07.781: INFO:                               node2                              node1                 111
May  6 23:29:07.781: INFO:                               node1                              node1                3250
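The CSV rows above appear to follow iperf 2's `--reportstyle C` layout (timestamp, local IP, local port, remote IP, remote port, transfer ID, interval, bytes transferred, bits/sec), and the summary table's MB/s figures can be reproduced from the final cumulative row. A minimal parsing sketch; the field layout is inferred from the log itself:

```python
# Final 0.0-10.0 summary row from the client pod output above.
row = "20220506232902.747,10.244.3.182,52262,10.233.59.181,6789,3,0.0-10.0,34083438592,27266682706"

fields = row.split(",")
start, end = (float(x) for x in fields[6].split("-"))  # interval, e.g. "0.0-10.0"
transferred_bytes = int(fields[7])                     # cumulative bytes sent

# Bandwidth in MB/s (MiB/s, matching the suite's From/To summary table).
mb_per_s = transferred_bytes / (end - start) / (1024 * 1024)
print(round(mb_per_s))  # → 3250, matching the node1 -> node1 summary row
```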
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-5717" for this suite.


• [SLOW TEST:44.859 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":1,"skipped":597,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:58.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
May  6 23:28:58.242: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-f5945b70-d780-4add-9504-4a1d4031f048  dns-6628  3775acb2-cbab-4baf-8fc8-cbb7080e28f6 73804 0 2022-05-06 23:28:58 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-05-06 23:28:58 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-qm7pk,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-khvlt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,Por
tworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-khvlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:n
il,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  6 23:29:02.252: INFO: testServerIP is 10.244.4.48
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May  6 23:29:02.263: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-6628  0a09f7fa-a7a0-4bd7-85c5-7135e97c9ac9 73939 0 2022-05-06 23:29:02 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-05-06 23:29:02 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c9xs7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Conta
iner{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9xs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.48],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,V
alue:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
May  6 23:29:08.270: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-6628 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:29:08.270: INFO: >>> kubeConfig: /root/.kube/config
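Given the pod's `DNSPolicy:None` and the `DNSConfig` dumped above (nameserver `10.244.4.48`, search `resolv.conf.local`, `ndots:2`), the `cat /etc/resolv.conf` check is expected to return roughly:

```
nameserver 10.244.4.48
search resolv.conf.local
options ndots:2
```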
STEP: Verifying customized name server and search path are working...
May  6 23:29:08.353: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-6628 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May  6 23:29:08.353: INFO: >>> kubeConfig: /root/.kube/config
May  6 23:29:08.494: INFO: Deleting pod e2e-dns-utils...
May  6 23:29:08.502: INFO: Deleting pod e2e-configmap-dns-server-f5945b70-d780-4add-9504-4a1d4031f048...
May  6 23:29:08.508: INFO: Deleting configmap e2e-coredns-configmap-qm7pk...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:08.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6628" for this suite.


• [SLOW TEST:10.311 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":3,"skipped":199,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:43.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
May  6 23:29:03.323: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May  6 23:29:03.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-test2vqdt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May  6 23:29:03.585: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May  6 23:29:03.585: INFO: stdout: "10.244.4.41:39070"
STEP: Verifying the preserved source ip
May  6 23:29:03.585: INFO: Waiting up to 2m0s to get response from 10.244.3.188:8080
May  6 23:29:03.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-test2vqdt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip'
May  6 23:29:03.861: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip\n"
May  6 23:29:03.861: INFO: stdout: "10.244.4.41:56552"
STEP: Verifying the preserved source ip
May  6 23:29:03.861: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May  6 23:29:03.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-test2vqdt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May  6 23:29:04.123: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May  6 23:29:04.123: INFO: stdout: "10.244.4.41:34906"
STEP: Verifying the preserved source ip
May  6 23:29:04.123: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
May  6 23:29:04.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-test2vqdt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
May  6 23:29:04.373: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
May  6 23:29:04.373: INFO: stdout: "10.244.4.41:54268"
STEP: Verifying the preserved source ip
May  6 23:29:04.373: INFO: Waiting up to 2m0s to get response from 10.244.4.41:8080
May  6 23:29:04.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testl2w8f -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip'
May  6 23:29:04.615: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip\n"
May  6 23:29:04.615: INFO: stdout: "10.244.2.5:48696"
STEP: Verifying the preserved source ip
May  6 23:29:04.615: INFO: Waiting up to 2m0s to get response from 10.244.3.188:8080
May  6 23:29:04.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testl2w8f -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip'
May  6 23:29:04.900: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip\n"
May  6 23:29:04.900: INFO: stdout: "10.244.2.5:35590"
STEP: Verifying the preserved source ip
May  6 23:29:04.900: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May  6 23:29:04.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testl2w8f -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May  6 23:29:05.149: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May  6 23:29:05.149: INFO: stdout: "10.244.2.5:45162"
STEP: Verifying the preserved source ip
May  6 23:29:05.149: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
May  6 23:29:05.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testl2w8f -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
May  6 23:29:05.394: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
May  6 23:29:05.394: INFO: stdout: "10.244.2.5:58626"
STEP: Verifying the preserved source ip
May  6 23:29:05.395: INFO: Waiting up to 2m0s to get response from 10.244.4.41:8080
May  6 23:29:05.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testqf478 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip'
May  6 23:29:05.698: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip\n"
May  6 23:29:05.698: INFO: stdout: "10.244.3.188:59420"
STEP: Verifying the preserved source ip
May  6 23:29:05.698: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May  6 23:29:05.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testqf478 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May  6 23:29:06.045: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May  6 23:29:06.045: INFO: stdout: "10.244.3.188:36244"
STEP: Verifying the preserved source ip
May  6 23:29:06.045: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May  6 23:29:06.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testqf478 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May  6 23:29:06.910: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May  6 23:29:06.910: INFO: stdout: "10.244.3.188:48666"
STEP: Verifying the preserved source ip
May  6 23:29:06.910: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
May  6 23:29:06.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testqf478 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
May  6 23:29:07.336: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
May  6 23:29:07.337: INFO: stdout: "10.244.3.188:32930"
STEP: Verifying the preserved source ip
May  6 23:29:07.337: INFO: Waiting up to 2m0s to get response from 10.244.4.41:8080
May  6 23:29:07.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testtz6vm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip'
May  6 23:29:07.591: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip\n"
May  6 23:29:07.591: INFO: stdout: "10.244.0.7:42940"
STEP: Verifying the preserved source ip
May  6 23:29:07.591: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May  6 23:29:07.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testtz6vm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May  6 23:29:07.858: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May  6 23:29:07.858: INFO: stdout: "10.244.0.7:54802"
STEP: Verifying the preserved source ip
May  6 23:29:07.858: INFO: Waiting up to 2m0s to get response from 10.244.3.188:8080
May  6 23:29:07.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testtz6vm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip'
May  6 23:29:08.092: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip\n"
May  6 23:29:08.092: INFO: stdout: "10.244.0.7:43408"
STEP: Verifying the preserved source ip
May  6 23:29:08.092: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
May  6 23:29:08.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testtz6vm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
May  6 23:29:08.355: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
May  6 23:29:08.355: INFO: stdout: "10.244.0.7:45704"
STEP: Verifying the preserved source ip
May  6 23:29:08.355: INFO: Waiting up to 2m0s to get response from 10.244.4.41:8080
May  6 23:29:08.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testzjqjr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip'
May  6 23:29:08.620: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.41:8080/clientip\n"
May  6 23:29:08.620: INFO: stdout: "10.244.1.6:35364"
STEP: Verifying the preserved source ip
May  6 23:29:08.620: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
May  6 23:29:08.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testzjqjr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
May  6 23:29:08.853: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
May  6 23:29:08.853: INFO: stdout: "10.244.1.6:46566"
STEP: Verifying the preserved source ip
May  6 23:29:08.853: INFO: Waiting up to 2m0s to get response from 10.244.3.188:8080
May  6 23:29:08.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testzjqjr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip'
May  6 23:29:09.106: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.188:8080/clientip\n"
May  6 23:29:09.106: INFO: stdout: "10.244.1.6:34046"
STEP: Verifying the preserved source ip
May  6 23:29:09.106: INFO: Waiting up to 2m0s to get response from 10.244.0.7:8080
May  6 23:29:09.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5662 exec no-snat-testzjqjr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip'
May  6 23:29:09.344: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.7:8080/clientip\n"
May  6 23:29:09.344: INFO: stdout: "10.244.1.6:57820"
STEP: Verifying the preserved source ip
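Each "Verifying the preserved source ip" step above asserts that the `/clientip` response reports the querying pod's own IP, i.e. no SNAT was applied on the pod-to-pod path. In outline, using values taken from the first check in this log (the helper name is illustrative, not the test's actual function):

```python
# agnhost's /clientip endpoint returns "ip:port" as seen by the server.
# If SNAT had occurred, the IP would be a node address, not the pod IP.
def assert_no_snat(clientip_response: str, source_pod_ip: str) -> None:
    seen_ip, _, _ = clientip_response.rpartition(":")
    if seen_ip != source_pod_ip:
        raise AssertionError(f"source was SNATed: {seen_ip} != {source_pod_ip}")

# Pod 10.244.4.41 curling 10.244.2.5:8080/clientip saw its own address.
assert_no_snat("10.244.4.41:39070", "10.244.4.41")
```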
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:09.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-5662" for this suite.


• [SLOW TEST:26.164 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":3,"skipped":283,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:55.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0506 23:27:55.610382      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May  6 23:27:55.610: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May  6 23:27:55.612: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
May  6 23:27:55.633: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:57.637: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:27:59.640: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:01.637: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:03.637: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
May  6 23:28:03.657: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:05.661: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:07.660: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:09.662: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:11.661: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
May  6 23:29:11.705: INFO: boom-server pod logs: 2022/05/06 23:28:01 external ip: 10.244.4.19
2022/05/06 23:28:01 listen on 0.0.0.0:9000
2022/05/06 23:28:01 probing 10.244.4.19
2022/05/06 23:28:09 tcp packet: &{SrcPort:41217 DestPort:9000 Seq:3367791291 Ack:0 Flags:40962 WindowSize:29200 Checksum:13946 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:09 tcp packet: &{SrcPort:41217 DestPort:9000 Seq:3367791292 Ack:163555931 Flags:32784 WindowSize:229 Checksum:39190 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:09 connection established
2022/05/06 23:28:09 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 161 1 9 190 35 187 200 188 106 188 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:09 checksumer: &{sum:517247 oddByte:33 length:39}
2022/05/06 23:28:09 ret:  517280
2022/05/06 23:28:09 ret:  58535
2022/05/06 23:28:09 ret:  58535
2022/05/06 23:28:09 boom packet injected
2022/05/06 23:28:09 tcp packet: &{SrcPort:41217 DestPort:9000 Seq:3367791292 Ack:163555931 Flags:32785 WindowSize:229 Checksum:39189 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
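The `checksumer` lines in the boom-server log can be reproduced arithmetically: the printed `ret` is the running `sum` plus the trailing `oddByte`, folded to 16 bits with end-around carry (the usual Internet-checksum fold). A sketch, assuming that reading of the log:

```python
def fold16(total: int) -> int:
    # Fold a running checksum into 16 bits with end-around carry.
    while total > 0xFFFF:
        total = (total >> 16) + (total & 0xFFFF)
    return total

# Values from the 23:28:09 connection: sum=517247, oddByte=33.
print(fold16(517247 + 33))  # → 58535, matching the logged "ret"
```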
2022/05/06 23:28:11 tcp packet: &{SrcPort:45055 DestPort:9000 Seq:916176191 Ack:0 Flags:40962 WindowSize:29200 Checksum:24392 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:11 tcp packet: &{SrcPort:45055 DestPort:9000 Seq:916176192 Ack:544549413 Flags:32784 WindowSize:229 Checksum:9110 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:11 connection established
2022/05/06 23:28:11 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 175 255 32 115 163 133 54 155 189 64 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:11 checksumer: &{sum:509157 oddByte:33 length:39}
2022/05/06 23:28:11 ret:  509190
2022/05/06 23:28:11 ret:  50445
2022/05/06 23:28:11 ret:  50445
2022/05/06 23:28:11 boom packet injected
2022/05/06 23:28:11 tcp packet: &{SrcPort:45055 DestPort:9000 Seq:916176192 Ack:544549413 Flags:32785 WindowSize:229 Checksum:9109 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:13 tcp packet: &{SrcPort:41552 DestPort:9000 Seq:1524621863 Ack:0 Flags:40962 WindowSize:29200 Checksum:7162 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:13 tcp packet: &{SrcPort:41552 DestPort:9000 Seq:1524621864 Ack:2524154578 Flags:32784 WindowSize:229 Checksum:459 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:13 connection established
2022/05/06 23:28:13 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 162 80 150 114 4 50 90 223 226 40 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:13 checksumer: &{sum:454136 oddByte:33 length:39}
2022/05/06 23:28:13 ret:  454169
2022/05/06 23:28:13 ret:  60959
2022/05/06 23:28:13 ret:  60959
2022/05/06 23:28:13 boom packet injected
2022/05/06 23:28:13 tcp packet: &{SrcPort:41552 DestPort:9000 Seq:1524621864 Ack:2524154578 Flags:32785 WindowSize:229 Checksum:458 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:15 tcp packet: &{SrcPort:38772 DestPort:9000 Seq:2841151287 Ack:0 Flags:40962 WindowSize:29200 Checksum:11133 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:15 tcp packet: &{SrcPort:38772 DestPort:9000 Seq:2841151288 Ack:1306012828 Flags:32784 WindowSize:229 Checksum:45135 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:15 connection established
2022/05/06 23:28:15 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 151 116 77 214 165 252 169 88 135 56 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:15 checksumer: &{sum:510265 oddByte:33 length:39}
2022/05/06 23:28:15 ret:  510298
2022/05/06 23:28:15 ret:  51553
2022/05/06 23:28:15 ret:  51553
2022/05/06 23:28:15 boom packet injected
2022/05/06 23:28:15 tcp packet: &{SrcPort:38772 DestPort:9000 Seq:2841151288 Ack:1306012828 Flags:32785 WindowSize:229 Checksum:45134 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:17 tcp packet: &{SrcPort:37475 DestPort:9000 Seq:1383796792 Ack:0 Flags:40962 WindowSize:29200 Checksum:63129 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:17 tcp packet: &{SrcPort:37475 DestPort:9000 Seq:1383796793 Ack:771606912 Flags:32784 WindowSize:229 Checksum:63121 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:17 connection established
2022/05/06 23:28:17 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 146 99 45 252 66 224 82 123 16 57 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:17 checksumer: &{sum:517347 oddByte:33 length:39}
2022/05/06 23:28:17 ret:  517380
2022/05/06 23:28:17 ret:  58635
2022/05/06 23:28:17 ret:  58635
2022/05/06 23:28:17 boom packet injected
2022/05/06 23:28:17 tcp packet: &{SrcPort:37475 DestPort:9000 Seq:1383796793 Ack:771606912 Flags:32785 WindowSize:229 Checksum:63120 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:19 tcp packet: &{SrcPort:41217 DestPort:9000 Seq:3367791293 Ack:163555932 Flags:32784 WindowSize:229 Checksum:19197 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:19 tcp packet: &{SrcPort:33268 DestPort:9000 Seq:79931483 Ack:0 Flags:40962 WindowSize:29200 Checksum:46245 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:19 tcp packet: &{SrcPort:33268 DestPort:9000 Seq:79931484 Ack:3335036413 Flags:32784 WindowSize:229 Checksum:16221 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:19 connection established
2022/05/06 23:28:19 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 129 244 198 199 23 93 4 195 168 92 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:19 checksumer: &{sum:534922 oddByte:33 length:39}
2022/05/06 23:28:19 ret:  534955
2022/05/06 23:28:19 ret:  10675
2022/05/06 23:28:19 ret:  10675
2022/05/06 23:28:19 boom packet injected
2022/05/06 23:28:19 tcp packet: &{SrcPort:33268 DestPort:9000 Seq:79931484 Ack:3335036413 Flags:32785 WindowSize:229 Checksum:16220 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:21 tcp packet: &{SrcPort:45055 DestPort:9000 Seq:916176193 Ack:544549414 Flags:32784 WindowSize:229 Checksum:54643 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:21 tcp packet: &{SrcPort:35503 DestPort:9000 Seq:4081478320 Ack:0 Flags:40962 WindowSize:29200 Checksum:62311 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:21 tcp packet: &{SrcPort:35503 DestPort:9000 Seq:4081478321 Ack:3073172697 Flags:32784 WindowSize:229 Checksum:16181 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:21 connection established
2022/05/06 23:28:21 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 138 175 183 43 94 57 243 70 106 177 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:21 checksumer: &{sum:458108 oddByte:33 length:39}
2022/05/06 23:28:21 ret:  458141
2022/05/06 23:28:21 ret:  64931
2022/05/06 23:28:21 ret:  64931
2022/05/06 23:28:21 boom packet injected
2022/05/06 23:28:21 tcp packet: &{SrcPort:35503 DestPort:9000 Seq:4081478321 Ack:3073172697 Flags:32785 WindowSize:229 Checksum:16180 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:23 tcp packet: &{SrcPort:41552 DestPort:9000 Seq:1524621865 Ack:2524154579 Flags:32784 WindowSize:229 Checksum:45988 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:23 tcp packet: &{SrcPort:33782 DestPort:9000 Seq:3456292278 Ack:0 Flags:40962 WindowSize:29200 Checksum:44174 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:23 tcp packet: &{SrcPort:33782 DestPort:9000 Seq:3456292279 Ack:2579624321 Flags:32784 WindowSize:229 Checksum:335 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:23 connection established
2022/05/06 23:28:23 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 131 246 153 192 106 225 206 2 213 183 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:23 checksumer: &{sum:541609 oddByte:33 length:39}
2022/05/06 23:28:23 ret:  541642
2022/05/06 23:28:23 ret:  17362
2022/05/06 23:28:23 ret:  17362
2022/05/06 23:28:23 boom packet injected
2022/05/06 23:28:23 tcp packet: &{SrcPort:33782 DestPort:9000 Seq:3456292279 Ack:2579624321 Flags:32785 WindowSize:229 Checksum:334 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:25 tcp packet: &{SrcPort:38772 DestPort:9000 Seq:2841151289 Ack:1306012829 Flags:32784 WindowSize:229 Checksum:25131 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:25 tcp packet: &{SrcPort:42441 DestPort:9000 Seq:1339689822 Ack:0 Flags:40962 WindowSize:29200 Checksum:52075 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:25 tcp packet: &{SrcPort:42441 DestPort:9000 Seq:1339689823 Ack:1078943058 Flags:32784 WindowSize:229 Checksum:1533 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:25 connection established
2022/05/06 23:28:25 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 165 201 64 77 214 178 79 218 11 95 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:25 checksumer: &{sum:521109 oddByte:33 length:39}
2022/05/06 23:28:25 ret:  521142
2022/05/06 23:28:25 ret:  62397
2022/05/06 23:28:25 ret:  62397
2022/05/06 23:28:25 boom packet injected
2022/05/06 23:28:25 tcp packet: &{SrcPort:42441 DestPort:9000 Seq:1339689823 Ack:1078943058 Flags:32785 WindowSize:229 Checksum:1532 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:27 tcp packet: &{SrcPort:37475 DestPort:9000 Seq:1383796794 Ack:771606913 Flags:32784 WindowSize:229 Checksum:43117 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:27 tcp packet: &{SrcPort:40508 DestPort:9000 Seq:2179841385 Ack:0 Flags:40962 WindowSize:29200 Checksum:59144 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:27 tcp packet: &{SrcPort:40508 DestPort:9000 Seq:2179841386 Ack:3612228814 Flags:32784 WindowSize:229 Checksum:41805 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:27 connection established
2022/05/06 23:28:27 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 158 60 215 76 182 46 129 237 189 106 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:27 checksumer: &{sum:458985 oddByte:33 length:39}
2022/05/06 23:28:27 ret:  459018
2022/05/06 23:28:27 ret:  273
2022/05/06 23:28:27 ret:  273
2022/05/06 23:28:27 boom packet injected
2022/05/06 23:28:27 tcp packet: &{SrcPort:40508 DestPort:9000 Seq:2179841386 Ack:3612228814 Flags:32785 WindowSize:229 Checksum:41804 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:29 tcp packet: &{SrcPort:34826 DestPort:9000 Seq:2456501614 Ack:0 Flags:40962 WindowSize:29200 Checksum:25832 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:29 tcp packet: &{SrcPort:34826 DestPort:9000 Seq:2456501615 Ack:1734535670 Flags:32784 WindowSize:229 Checksum:56351 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:29 connection established
2022/05/06 23:28:29 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 136 10 103 97 99 86 146 107 61 111 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:29 checksumer: &{sum:429473 oddByte:33 length:39}
2022/05/06 23:28:29 ret:  429506
2022/05/06 23:28:29 ret:  36296
2022/05/06 23:28:29 ret:  36296
2022/05/06 23:28:29 boom packet injected
2022/05/06 23:28:29 tcp packet: &{SrcPort:34826 DestPort:9000 Seq:2456501615 Ack:1734535670 Flags:32785 WindowSize:229 Checksum:56350 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:29 tcp packet: &{SrcPort:33268 DestPort:9000 Seq:79931485 Ack:3335036414 Flags:32784 WindowSize:229 Checksum:61752 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:31 tcp packet: &{SrcPort:35503 DestPort:9000 Seq:4081478322 Ack:3073172698 Flags:32784 WindowSize:229 Checksum:61714 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:31 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2628138455 Ack:0 Flags:40962 WindowSize:29200 Checksum:14715 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:31 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2628138456 Ack:2085006840 Flags:32784 WindowSize:229 Checksum:53244 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:31 connection established
2022/05/06 23:28:31 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 169 2 124 69 39 88 156 166 53 216 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:31 checksumer: &{sum:462749 oddByte:33 length:39}
2022/05/06 23:28:31 ret:  462782
2022/05/06 23:28:31 ret:  4037
2022/05/06 23:28:31 ret:  4037
2022/05/06 23:28:31 boom packet injected
2022/05/06 23:28:31 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2628138456 Ack:2085006840 Flags:32785 WindowSize:229 Checksum:53243 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:33 tcp packet: &{SrcPort:33782 DestPort:9000 Seq:3456292280 Ack:2579624322 Flags:32784 WindowSize:229 Checksum:45868 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:33 tcp packet: &{SrcPort:37960 DestPort:9000 Seq:3111653891 Ack:0 Flags:40962 WindowSize:29200 Checksum:19814 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:33 tcp packet: &{SrcPort:37960 DestPort:9000 Seq:3111653892 Ack:846654835 Flags:32784 WindowSize:229 Checksum:58987 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:33 connection established
2022/05/06 23:28:33 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 148 72 50 117 102 211 185 120 18 4 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:33 checksumer: &{sum:458359 oddByte:33 length:39}
2022/05/06 23:28:33 ret:  458392
2022/05/06 23:28:33 ret:  65182
2022/05/06 23:28:33 ret:  65182
2022/05/06 23:28:33 boom packet injected
2022/05/06 23:28:33 tcp packet: &{SrcPort:37960 DestPort:9000 Seq:3111653892 Ack:846654835 Flags:32785 WindowSize:229 Checksum:58986 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:35 tcp packet: &{SrcPort:42441 DestPort:9000 Seq:1339689824 Ack:1078943059 Flags:32784 WindowSize:229 Checksum:47065 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:35 tcp packet: &{SrcPort:39554 DestPort:9000 Seq:2928183445 Ack:0 Flags:40962 WindowSize:29200 Checksum:54201 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:35 tcp packet: &{SrcPort:39554 DestPort:9000 Seq:2928183446 Ack:58878650 Flags:32784 WindowSize:229 Checksum:5789 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:35 connection established
2022/05/06 23:28:35 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 154 130 3 128 228 26 174 136 136 150 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:35 checksumer: &{sum:470327 oddByte:33 length:39}
2022/05/06 23:28:35 ret:  470360
2022/05/06 23:28:35 ret:  11615
2022/05/06 23:28:35 ret:  11615
2022/05/06 23:28:35 boom packet injected
2022/05/06 23:28:35 tcp packet: &{SrcPort:39554 DestPort:9000 Seq:2928183446 Ack:58878650 Flags:32785 WindowSize:229 Checksum:5788 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:37 tcp packet: &{SrcPort:40508 DestPort:9000 Seq:2179841387 Ack:3612228815 Flags:32784 WindowSize:229 Checksum:21800 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:37 tcp packet: &{SrcPort:39057 DestPort:9000 Seq:1938631470 Ack:0 Flags:40962 WindowSize:29200 Checksum:26172 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:37 tcp packet: &{SrcPort:39057 DestPort:9000 Seq:1938631471 Ack:1512596338 Flags:32784 WindowSize:229 Checksum:20975 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:37 connection established
2022/05/06 23:28:37 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 152 145 90 38 220 210 115 141 43 47 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:37 checksumer: &{sum:473068 oddByte:33 length:39}
2022/05/06 23:28:37 ret:  473101
2022/05/06 23:28:37 ret:  14356
2022/05/06 23:28:37 ret:  14356
2022/05/06 23:28:37 boom packet injected
2022/05/06 23:28:37 tcp packet: &{SrcPort:39057 DestPort:9000 Seq:1938631471 Ack:1512596338 Flags:32785 WindowSize:229 Checksum:20974 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:39 tcp packet: &{SrcPort:34826 DestPort:9000 Seq:2456501616 Ack:1734535671 Flags:32784 WindowSize:229 Checksum:36346 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:39 tcp packet: &{SrcPort:34982 DestPort:9000 Seq:2848671542 Ack:0 Flags:40962 WindowSize:29200 Checksum:7183 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:39 tcp packet: &{SrcPort:34982 DestPort:9000 Seq:2848671543 Ack:2158331063 Flags:32784 WindowSize:229 Checksum:47148 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:39 connection established
2022/05/06 23:28:39 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 136 166 128 163 254 23 169 203 71 55 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:39 checksumer: &{sum:480630 oddByte:33 length:39}
2022/05/06 23:28:39 ret:  480663
2022/05/06 23:28:39 ret:  21918
2022/05/06 23:28:39 ret:  21918
2022/05/06 23:28:39 boom packet injected
2022/05/06 23:28:39 tcp packet: &{SrcPort:34982 DestPort:9000 Seq:2848671543 Ack:2158331063 Flags:32785 WindowSize:229 Checksum:47147 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:41 tcp packet: &{SrcPort:43266 DestPort:9000 Seq:2628138457 Ack:2085006841 Flags:32784 WindowSize:229 Checksum:33239 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:41 tcp packet: &{SrcPort:34766 DestPort:9000 Seq:1744162734 Ack:0 Flags:40962 WindowSize:29200 Checksum:52853 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:41 tcp packet: &{SrcPort:34766 DestPort:9000 Seq:1744162735 Ack:3236756596 Flags:32784 WindowSize:229 Checksum:43706 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:41 connection established
2022/05/06 23:28:41 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 135 206 192 235 117 212 103 245 207 175 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:41 checksumer: &{sum:599154 oddByte:33 length:39}
2022/05/06 23:28:41 ret:  599187
2022/05/06 23:28:41 ret:  9372
2022/05/06 23:28:41 ret:  9372
2022/05/06 23:28:41 boom packet injected
2022/05/06 23:28:41 tcp packet: &{SrcPort:34766 DestPort:9000 Seq:1744162735 Ack:3236756596 Flags:32785 WindowSize:229 Checksum:43705 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:43 tcp packet: &{SrcPort:37960 DestPort:9000 Seq:3111653893 Ack:846654836 Flags:32784 WindowSize:229 Checksum:38983 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:43 tcp packet: &{SrcPort:43916 DestPort:9000 Seq:1854576811 Ack:0 Flags:40962 WindowSize:29200 Checksum:54098 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:43 tcp packet: &{SrcPort:43916 DestPort:9000 Seq:1854576812 Ack:2886695400 Flags:32784 WindowSize:229 Checksum:16178 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:43 connection established
2022/05/06 23:28:43 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 171 140 172 13 243 72 110 138 152 172 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:43 checksumer: &{sum:461520 oddByte:33 length:39}
2022/05/06 23:28:43 ret:  461553
2022/05/06 23:28:43 ret:  2808
2022/05/06 23:28:43 ret:  2808
2022/05/06 23:28:43 boom packet injected
2022/05/06 23:28:43 tcp packet: &{SrcPort:43916 DestPort:9000 Seq:1854576812 Ack:2886695400 Flags:32785 WindowSize:229 Checksum:16177 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:45 tcp packet: &{SrcPort:39554 DestPort:9000 Seq:2928183447 Ack:58878651 Flags:32784 WindowSize:229 Checksum:51318 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:45 tcp packet: &{SrcPort:37884 DestPort:9000 Seq:1162747736 Ack:0 Flags:40962 WindowSize:29200 Checksum:34183 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:45 tcp packet: &{SrcPort:37884 DestPort:9000 Seq:1162747737 Ack:3150537955 Flags:32784 WindowSize:229 Checksum:61122 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:45 connection established
2022/05/06 23:28:45 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 147 252 187 199 222 67 69 78 31 89 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:45 checksumer: &{sum:499728 oddByte:33 length:39}
2022/05/06 23:28:45 ret:  499761
2022/05/06 23:28:45 ret:  41016
2022/05/06 23:28:45 ret:  41016
2022/05/06 23:28:45 boom packet injected
2022/05/06 23:28:45 tcp packet: &{SrcPort:37884 DestPort:9000 Seq:1162747737 Ack:3150537955 Flags:32785 WindowSize:229 Checksum:61121 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:47 tcp packet: &{SrcPort:39057 DestPort:9000 Seq:1938631472 Ack:1512596339 Flags:32784 WindowSize:229 Checksum:970 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:47 tcp packet: &{SrcPort:42534 DestPort:9000 Seq:912917879 Ack:0 Flags:40962 WindowSize:29200 Checksum:37973 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:47 tcp packet: &{SrcPort:42534 DestPort:9000 Seq:912917880 Ack:3713620283 Flags:32784 WindowSize:229 Checksum:57314 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:47 connection established
2022/05/06 23:28:47 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 166 38 221 87 210 155 54 106 5 120 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:47 checksumer: &{sum:453904 oddByte:33 length:39}
2022/05/06 23:28:47 ret:  453937
2022/05/06 23:28:47 ret:  60727
2022/05/06 23:28:47 ret:  60727
2022/05/06 23:28:47 boom packet injected
2022/05/06 23:28:47 tcp packet: &{SrcPort:42534 DestPort:9000 Seq:912917880 Ack:3713620283 Flags:32785 WindowSize:229 Checksum:57313 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:49 tcp packet: &{SrcPort:34982 DestPort:9000 Seq:2848671544 Ack:2158331064 Flags:32784 WindowSize:229 Checksum:27142 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:49 tcp packet: &{SrcPort:45056 DestPort:9000 Seq:501275616 Ack:0 Flags:40962 WindowSize:29200 Checksum:50378 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:49 tcp packet: &{SrcPort:45056 DestPort:9000 Seq:501275617 Ack:3434628762 Flags:32784 WindowSize:229 Checksum:11208 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:49 connection established
2022/05/06 23:28:49 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 176 0 204 182 191 250 29 224 219 225 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:49 checksumer: &{sum:550067 oddByte:33 length:39}
2022/05/06 23:28:49 ret:  550100
2022/05/06 23:28:49 ret:  25820
2022/05/06 23:28:49 ret:  25820
2022/05/06 23:28:49 boom packet injected
2022/05/06 23:28:49 tcp packet: &{SrcPort:45056 DestPort:9000 Seq:501275617 Ack:3434628762 Flags:32785 WindowSize:229 Checksum:11207 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:51 tcp packet: &{SrcPort:34766 DestPort:9000 Seq:1744162736 Ack:3236756597 Flags:32784 WindowSize:229 Checksum:23695 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:51 tcp packet: &{SrcPort:34463 DestPort:9000 Seq:4223747557 Ack:0 Flags:40962 WindowSize:29200 Checksum:40565 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:51 tcp packet: &{SrcPort:34463 DestPort:9000 Seq:4223747558 Ack:3554458703 Flags:32784 WindowSize:229 Checksum:32970 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:51 connection established
2022/05/06 23:28:51 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 134 159 211 219 53 175 251 193 69 230 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:51 checksumer: &{sum:574286 oddByte:33 length:39}
2022/05/06 23:28:51 ret:  574319
2022/05/06 23:28:51 ret:  50039
2022/05/06 23:28:51 ret:  50039
2022/05/06 23:28:51 boom packet injected
2022/05/06 23:28:51 tcp packet: &{SrcPort:34463 DestPort:9000 Seq:4223747558 Ack:3554458703 Flags:32785 WindowSize:229 Checksum:32969 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:53 tcp packet: &{SrcPort:43916 DestPort:9000 Seq:1854576813 Ack:2886695401 Flags:32784 WindowSize:229 Checksum:61706 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:53 tcp packet: &{SrcPort:46241 DestPort:9000 Seq:2784617115 Ack:0 Flags:40962 WindowSize:29200 Checksum:8629 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:53 tcp packet: &{SrcPort:46241 DestPort:9000 Seq:2784617116 Ack:3498920682 Flags:32784 WindowSize:229 Checksum:28908 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:53 connection established
2022/05/06 23:28:53 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 180 161 208 139 196 74 165 249 226 156 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:53 checksumer: &{sum:524111 oddByte:33 length:39}
2022/05/06 23:28:53 ret:  524144
2022/05/06 23:28:53 ret:  65399
2022/05/06 23:28:53 ret:  65399
2022/05/06 23:28:53 boom packet injected
2022/05/06 23:28:53 tcp packet: &{SrcPort:46241 DestPort:9000 Seq:2784617116 Ack:3498920682 Flags:32785 WindowSize:229 Checksum:28907 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:55 tcp packet: &{SrcPort:45045 DestPort:9000 Seq:1061835815 Ack:0 Flags:40962 WindowSize:29200 Checksum:5043 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:55 tcp packet: &{SrcPort:45045 DestPort:9000 Seq:1061835816 Ack:2786599912 Flags:32784 WindowSize:229 Checksum:44177 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:55 connection established
2022/05/06 23:28:55 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 175 245 166 22 157 72 63 74 84 40 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:55 checksumer: &{sum:440325 oddByte:33 length:39}
2022/05/06 23:28:55 ret:  440358
2022/05/06 23:28:55 ret:  47148
2022/05/06 23:28:55 ret:  47148
2022/05/06 23:28:55 boom packet injected
2022/05/06 23:28:55 tcp packet: &{SrcPort:45045 DestPort:9000 Seq:1061835816 Ack:2786599912 Flags:32785 WindowSize:229 Checksum:44176 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:55 tcp packet: &{SrcPort:37884 DestPort:9000 Seq:1162747738 Ack:3150537956 Flags:32784 WindowSize:229 Checksum:41113 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:57 tcp packet: &{SrcPort:42534 DestPort:9000 Seq:912917881 Ack:3713620284 Flags:32784 WindowSize:229 Checksum:37310 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:57 tcp packet: &{SrcPort:41498 DestPort:9000 Seq:2508619421 Ack:0 Flags:40962 WindowSize:29200 Checksum:39179 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:57 tcp packet: &{SrcPort:41498 DestPort:9000 Seq:2508619422 Ack:2526244315 Flags:32784 WindowSize:229 Checksum:61353 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:57 connection established
2022/05/06 23:28:57 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 162 26 150 145 231 59 149 134 126 158 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:57 checksumer: &{sum:458162 oddByte:33 length:39}
2022/05/06 23:28:57 ret:  458195
2022/05/06 23:28:57 ret:  64985
2022/05/06 23:28:57 ret:  64985
2022/05/06 23:28:57 boom packet injected
2022/05/06 23:28:57 tcp packet: &{SrcPort:41498 DestPort:9000 Seq:2508619422 Ack:2526244315 Flags:32785 WindowSize:229 Checksum:61352 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:59 tcp packet: &{SrcPort:45056 DestPort:9000 Seq:501275618 Ack:3434628763 Flags:32784 WindowSize:229 Checksum:56740 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:59 tcp packet: &{SrcPort:41264 DestPort:9000 Seq:760084714 Ack:0 Flags:40962 WindowSize:29200 Checksum:32784 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:28:59 tcp packet: &{SrcPort:41264 DestPort:9000 Seq:760084715 Ack:3466105315 Flags:32784 WindowSize:229 Checksum:29394 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:28:59 connection established
2022/05/06 23:28:59 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 161 48 206 151 11 67 45 77 248 235 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:28:59 checksumer: &{sum:472351 oddByte:33 length:39}
2022/05/06 23:28:59 ret:  472384
2022/05/06 23:28:59 ret:  13639
2022/05/06 23:28:59 ret:  13639
2022/05/06 23:28:59 boom packet injected
2022/05/06 23:28:59 tcp packet: &{SrcPort:41264 DestPort:9000 Seq:760084715 Ack:3466105315 Flags:32785 WindowSize:229 Checksum:29393 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:01 tcp packet: &{SrcPort:34463 DestPort:9000 Seq:4223747559 Ack:3554458704 Flags:32784 WindowSize:229 Checksum:12966 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:01 tcp packet: &{SrcPort:41949 DestPort:9000 Seq:3104629911 Ack:0 Flags:40962 WindowSize:29200 Checksum:65061 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:29:01 tcp packet: &{SrcPort:41949 DestPort:9000 Seq:3104629912 Ack:2423076267 Flags:32784 WindowSize:229 Checksum:33658 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:01 connection established
2022/05/06 23:29:01 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 163 221 144 107 175 11 185 12 228 152 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:29:01 checksumer: &{sum:453375 oddByte:33 length:39}
2022/05/06 23:29:01 ret:  453408
2022/05/06 23:29:01 ret:  60198
2022/05/06 23:29:01 ret:  60198
2022/05/06 23:29:01 boom packet injected
2022/05/06 23:29:01 tcp packet: &{SrcPort:41949 DestPort:9000 Seq:3104629912 Ack:2423076267 Flags:32785 WindowSize:229 Checksum:33657 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:03 tcp packet: &{SrcPort:46241 DestPort:9000 Seq:2784617117 Ack:3498920683 Flags:32784 WindowSize:229 Checksum:8906 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:03 tcp packet: &{SrcPort:36988 DestPort:9000 Seq:3063595453 Ack:0 Flags:40962 WindowSize:29200 Checksum:12035 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:29:03 tcp packet: &{SrcPort:36988 DestPort:9000 Seq:3063595454 Ack:3256640449 Flags:32784 WindowSize:229 Checksum:19648 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:03 connection established
2022/05/06 23:29:03 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 144 124 194 26 221 33 182 154 193 190 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:29:03 checksumer: &{sum:459558 oddByte:33 length:39}
2022/05/06 23:29:03 ret:  459591
2022/05/06 23:29:03 ret:  846
2022/05/06 23:29:03 ret:  846
2022/05/06 23:29:03 boom packet injected
2022/05/06 23:29:03 tcp packet: &{SrcPort:36988 DestPort:9000 Seq:3063595454 Ack:3256640449 Flags:32785 WindowSize:229 Checksum:19647 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:05 tcp packet: &{SrcPort:45045 DestPort:9000 Seq:1061835817 Ack:2786599913 Flags:32784 WindowSize:229 Checksum:24173 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:05 tcp packet: &{SrcPort:43492 DestPort:9000 Seq:3195703368 Ack:0 Flags:40962 WindowSize:29200 Checksum:14175 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:29:05 tcp packet: &{SrcPort:43492 DestPort:9000 Seq:3195703369 Ack:3589021733 Flags:32784 WindowSize:229 Checksum:32025 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:05 connection established
2022/05/06 23:29:05 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 169 228 213 234 153 133 190 122 144 73 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:29:05 checksumer: &{sum:526821 oddByte:33 length:39}
2022/05/06 23:29:05 ret:  526854
2022/05/06 23:29:05 ret:  2574
2022/05/06 23:29:05 ret:  2574
2022/05/06 23:29:05 boom packet injected
2022/05/06 23:29:05 tcp packet: &{SrcPort:43492 DestPort:9000 Seq:3195703369 Ack:3589021733 Flags:32785 WindowSize:229 Checksum:32024 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:07 tcp packet: &{SrcPort:41498 DestPort:9000 Seq:2508619423 Ack:2526244316 Flags:32784 WindowSize:229 Checksum:41351 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:07 tcp packet: &{SrcPort:35920 DestPort:9000 Seq:4264256054 Ack:0 Flags:40962 WindowSize:29200 Checksum:15236 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:29:07 tcp packet: &{SrcPort:35920 DestPort:9000 Seq:4264256055 Ack:474161415 Flags:32784 WindowSize:229 Checksum:12854 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:07 connection established
2022/05/06 23:29:07 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 140 80 28 65 154 103 254 43 98 55 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:29:07 checksumer: &{sum:412962 oddByte:33 length:39}
2022/05/06 23:29:07 ret:  412995
2022/05/06 23:29:07 ret:  19785
2022/05/06 23:29:07 ret:  19785
2022/05/06 23:29:07 boom packet injected
2022/05/06 23:29:07 tcp packet: &{SrcPort:35920 DestPort:9000 Seq:4264256055 Ack:474161415 Flags:32785 WindowSize:229 Checksum:12853 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:09 tcp packet: &{SrcPort:41264 DestPort:9000 Seq:760084716 Ack:3466105316 Flags:32784 WindowSize:229 Checksum:9392 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:09 tcp packet: &{SrcPort:41202 DestPort:9000 Seq:3952610569 Ack:0 Flags:40962 WindowSize:29200 Checksum:34513 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.177
2022/05/06 23:29:09 tcp packet: &{SrcPort:41202 DestPort:9000 Seq:3952610570 Ack:442775707 Flags:32784 WindowSize:229 Checksum:24573 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.177
2022/05/06 23:29:09 connection established
2022/05/06 23:29:09 calling checksumTCP: 10.244.4.19 10.244.3.177 [35 40 160 242 26 98 177 251 235 152 13 10 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/05/06 23:29:09 checksumer: &{sum:517091 oddByte:33 length:39}
2022/05/06 23:29:09 ret:  517124
2022/05/06 23:29:09 ret:  58379
2022/05/06 23:29:09 ret:  58379
2022/05/06 23:29:09 boom packet injected
2022/05/06 23:29:09 tcp packet: &{SrcPort:41202 DestPort:9000 Seq:3952610570 Ack:442775707 Flags:32785 WindowSize:229 Checksum:24572 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.177

May  6 23:29:11.706: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:11.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-1852" for this suite.


• [SLOW TEST:76.124 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":31,"failed":0}
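Editor's note on the repeated `checksumer` / `ret:` lines above: they show a one's-complement carry fold. Each trace first adds the trailing odd byte to the running sum (e.g. 454136 + 33 = 454169), then folds the carries into 16 bits until `ret:` stabilizes (454169 → 60959, printed twice once no carry remains). The sketch below reproduces only that folding step with values taken from the log; it is not the full TCP pseudo-header checksum, and the function name `fold16` is illustrative, not from the test binary. It also decodes the payload byte slice that accompanies every `checksumTCP` call.

```go
package main

import "fmt"

// fold16 folds the carries of a one's-complement sum into 16 bits.
// This matches the repeated "ret:" lines in the log: the first ret is
// sum+oddByte, the following rets are the folded 16-bit value.
func fold16(sum uint32) uint32 {
	for sum>>16 != 0 {
		sum = (sum & 0xFFFF) + (sum >> 16)
	}
	return sum
}

func main() {
	// Values from one checksumer line above: sum:454136 oddByte:33.
	fmt.Println(fold16(454136 + 33)) // 60959, as logged

	// The payload logged with every checksumTCP call is plain ASCII.
	payload := []byte{98, 111, 111, 109, 33, 33, 33}
	fmt.Println(string(payload)) // boom!!!
}
```

The trailing `oddByte` exists because the 39-byte checksummed region is odd-length, so the last byte cannot be paired into a 16-bit word and is accumulated separately before folding.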

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:44.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-5343
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:44.849: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:44.893: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:46.898: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:48.896: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:50.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:52.898: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:54.896: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:56.898: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:58.896: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:00.895: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:02.897: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:04.897: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:04.902: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:06.907: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:14.931: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:14.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:14.944: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:14.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5343" for this suite.


S [SKIPPING] [30.283 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:15.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May  6 23:29:15.136: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May  6 23:29:15.142: INFO: starting watch
STEP: patching
STEP: updating
May  6 23:29:15.156: INFO: waiting for watch events with expected annotations
May  6 23:29:15.156: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
May  6 23:29:15.156: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:15.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-6257" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":352,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":438,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:52.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-9577
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:52.882: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:52.912: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:54.915: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:56.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:58.915: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:00.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:02.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:04.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:06.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:08.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:10.921: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:12.917: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:12.923: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:14.927: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:20.947: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:20.947: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:20.954: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:20.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9577" for this suite.


S [SKIPPING] [28.192 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:28:56.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-3980
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:28:56.457: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:28:56.496: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:58.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:00.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:02.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:04.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:06.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:08.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:10.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:12.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:14.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:16.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:18.502: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:18.507: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:24.529: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:24.529: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:24.536: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:24.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3980" for this suite.


S [SKIPPING] [28.208 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:24.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:25.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1808" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":300,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:08.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
May  6 23:29:08.114: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:10.120: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:12.118: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:14.116: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May  6 23:29:14.131: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:16.135: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:18.137: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
May  6 23:29:20.160: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:22.166: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:24.164: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:26.162: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
May  6 23:29:26.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-1608 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
May  6 23:29:27.003: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
May  6 23:29:27.003: INFO: stdout: "tcp      6 3596 CLOSE_WAIT src=10.244.3.203 dst=10.10.190.208 sport=41722 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=62647 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
May  6 23:29:27.003: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3596 CLOSE_WAIT src=10.244.3.203 dst=10.10.190.208 sport=41722 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=62647 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:27.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-1608" for this suite.


• [SLOW TEST:18.940 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":745,"failed":0}
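In the conntrack entry the test grepped for, the third field (`3596`) is the remaining timeout in seconds. A rough sketch of extracting it (the regex and variable names are ours; the entry string is taken from the log, with the secctx field dropped for brevity):

```python
import re

# Conntrack entry as captured by the test (abridged from the log above).
entry = ("tcp      6 3596 CLOSE_WAIT src=10.244.3.203 dst=10.10.190.208 "
         "sport=41722 dport=11302 src=10.10.190.208 dst=10.10.190.207 "
         "sport=11302 dport=62647 [ASSURED] mark=0 use=1")

# For TCP entries, `conntrack -L` prints: proto, protonum, timeout, state, ...
m = re.match(r"tcp\s+\d+\s+(\d+)\s+CLOSE_WAIT", entry)
timeout = int(m.group(1))
print(timeout)  # -> 3596
# The test checks this sits close to the CLOSE_WAIT timeout kube-proxy
# configures (1h by default), which 3596s of 3600s satisfies.
```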

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:27.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
May  6 23:29:27.463: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:27.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6501" for this suite.


S [SKIPPING] [0.035 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:27.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
May  6 23:29:27.762: INFO: Creating new exec pod
May  6 23:29:33.781: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
May  6 23:29:33.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-43 exec execpod-noendpointspthv5 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
May  6 23:29:35.052: INFO: rc: 1
May  6 23:29:35.052: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-43 exec execpod-noendpointspthv5 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:35.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-43" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:7.332 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":3,"skipped":1099,"failed":0}
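The `REFUSED` verdict above comes from `agnhost connect`, which classifies the connection error. A hedged Python analogue of that classification (function and verdict names are ours, modeled on the log output, not agnhost's actual source):

```python
import errno
import socket

def connect_verdict(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connect and classify the outcome, roughly like
    `agnhost connect --timeout=3s host:port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except socket.timeout:
        return "TIMEOUT"
    except OSError as e:
        return "REFUSED" if e.errno == errno.ECONNREFUSED else "OTHER"

# Pick a local port that nothing listens on (bind, read it back, close),
# then connect to it: a service with no endpoints behaves the same way.
s = socket.socket()
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
s.close()
print(connect_verdict("127.0.0.1", port))  # -> REFUSED
```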

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:35.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
May  6 23:29:35.289: INFO: (0) /api/v1/nodes/node1/proxy/logs/: 
anaconda/
audit/
boot.log
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May  6 23:29:35.611: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:35.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1798" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:01.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-7287
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:29:01.534: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:01.565: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:03.569: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:05.569: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:07.571: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:09.569: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:11.569: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:13.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:15.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:17.571: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:19.569: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:21.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:23.569: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:23.574: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:25.577: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:41.601: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:41.601: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:41.608: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:41.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7287" for this suite.


S [SKIPPING] [40.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:27:57.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-8247
STEP: creating service service-proxy-disabled in namespace services-8247
STEP: creating replication controller service-proxy-disabled in namespace services-8247
I0506 23:27:57.896990      31 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-8247, replica count: 3
I0506 23:28:00.948820      31 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:03.949264      31 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:06.950289      31 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:09.950678      31 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:12.951495      31 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-8247
STEP: creating service service-proxy-toggled in namespace services-8247
STEP: creating replication controller service-proxy-toggled in namespace services-8247
I0506 23:28:12.966054      31 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-8247, replica count: 3
I0506 23:28:16.017013      31 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:19.017207      31 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:28:22.017776      31 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May  6 23:28:22.019: INFO: Creating new host exec pod
May  6 23:28:22.033: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:24.037: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:26.038: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:28.038: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:30.038: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:32.041: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:28:32.041: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:28:44.060: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done" in pod services-8247/verify-service-up-host-exec-pod
May  6 23:28:44.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done'
May  6 23:28:44.544: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo\n[... the "+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo" pair repeats identically for all 150 iterations ...]\n"
May  6 23:28:44.544: INFO: stdout: "service-proxy-toggled-tvtgr\nservice-proxy-toggled-clm98\nservice-proxy-toggled-kgwrm\n[... 150 lines in total, alternating among the three backend pods service-proxy-toggled-tvtgr, service-proxy-toggled-clm98, and service-proxy-toggled-kgwrm ...]\n"
May  6 23:28:44.545: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done" in pod services-8247/verify-service-up-exec-pod-rlzdq
May  6 23:28:44.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-up-exec-pod-rlzdq -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done'
May  6 23:28:46.315: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo\n[... the "+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo" pair repeats identically for all 150 iterations ...]\n"
May  6 23:28:46.315: INFO: stdout: "service-proxy-toggled-clm98\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-kgwrm\n[... 150 lines in total, alternating among the three backend pods service-proxy-toggled-tvtgr, service-proxy-toggled-clm98, and service-proxy-toggled-kgwrm ...]\n"
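The "verifying service has 3 reachable backends" step above passes when the wget loop's stdout contains responses from all three replica pods. A minimal sketch of that tally, assuming the pod names shown in this log (the sample stdout here is hypothetical, shortened from 150 lines to four):

```shell
#!/bin/sh
# Count distinct backend pods seen in the verification loop's stdout.
# Pod names are taken from this log; the sample output is abbreviated.
stdout='service-proxy-toggled-tvtgr
service-proxy-toggled-clm98
service-proxy-toggled-kgwrm
service-proxy-toggled-tvtgr'

# Each wget against the ClusterIP prints the name of the pod that answered;
# sorting and deduplicating yields the number of reachable backends.
backends=$(printf '%s\n' "$stdout" | sort -u | wc -l)
echo "distinct backends: $backends"   # prints "distinct backends: 3"
```

The e2e framework applies the same idea: with 3 replicas behind the service, every replica name must appear at least once across the 150 requests for the service to count as fully up.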
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8247
STEP: Deleting pod verify-service-up-exec-pod-rlzdq in namespace services-8247
STEP: verifying service-disabled is not up
May  6 23:28:46.332: INFO: Creating new host exec pod
May  6 23:28:46.347: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:48.351: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:50.351: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:52.351: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:28:52.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed'
May  6 23:28:54.721: INFO: rc: 28
May  6 23:28:54.721: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed" in pod services-8247/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.37.21:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8247
STEP: adding service-proxy-name label
STEP: verifying service is not up
May  6 23:28:54.753: INFO: Creating new host exec pod
May  6 23:28:54.767: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:56.771: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:28:58.770: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:00.771: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:02.773: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:04.796: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:06.771: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:08.769: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:10.770: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:12.772: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:29:12.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.48:80 && echo service-down-failed'
May  6 23:29:15.211: INFO: rc: 28
May  6 23:29:15.211: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.59.48:80 && echo service-down-failed" in pod services-8247/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.48:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.59.48:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8247
STEP: removing service-proxy-name label
STEP: verifying service is up
May  6 23:29:15.315: INFO: Creating new host exec pod
May  6 23:29:15.327: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:17.333: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:19.331: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:21.331: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:29:21.332: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:29:33.348: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done" in pod services-8247/verify-service-up-host-exec-pod
May  6 23:29:33.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done'
May  6 23:29:33.741: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo\n[... the '+ wget -q -T 1 -O - http://10.233.59.48:80' / '+ echo' trace pair repeats identically for all 150 iterations ...]"
May  6 23:29:33.742: INFO: stdout: "service-proxy-toggled-clm98\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-kgwrm\n[... 150 responses in total, interleaved across the three endpoints service-proxy-toggled-clm98, service-proxy-toggled-tvtgr, and service-proxy-toggled-kgwrm ...]"
May  6 23:29:33.742: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done" in pod services-8247/verify-service-up-exec-pod-v8ggg
May  6 23:29:33.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-up-exec-pod-v8ggg -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.48:80 2>&1 || true; echo; done'
May  6 23:29:34.396: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.48:80\n+ echo\n[... the '+ wget -q -T 1 -O - http://10.233.59.48:80' / '+ echo' trace pair repeats identically for all 150 iterations ...]"
May  6 23:29:34.397: INFO: stdout: "service-proxy-toggled-clm98\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-tvtgr\nservice-proxy-toggled-kgwrm\n[... 150 responses in total, interleaved across the three endpoints service-proxy-toggled-clm98, service-proxy-toggled-tvtgr, and service-proxy-toggled-kgwrm ...]"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8247
STEP: Deleting pod verify-service-up-exec-pod-v8ggg in namespace services-8247
STEP: verifying service-disabled is still not up
May  6 23:29:34.413: INFO: Creating new host exec pod
May  6 23:29:34.429: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:36.433: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:38.437: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:40.432: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:42.433: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:29:42.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed'
May  6 23:29:44.686: INFO: rc: 28
May  6 23:29:44.686: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed" in pod services-8247/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8247 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.21:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.37.21:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8247
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:44.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8247" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:106.835 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":2,"skipped":515,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:45.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
May  6 23:29:45.218: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:45.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7042" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:01.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-3683
STEP: creating a client pod for probing the service svc-udp
May  6 23:29:01.064: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:03.068: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:05.069: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:07.068: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:09.067: INFO: The status of Pod pod-client is Running (Ready = true)
May  6 23:29:09.082: INFO: Pod client logs: Fri May  6 23:29:06 UTC 2022
Fri May  6 23:29:06 UTC 2022 Try: 1

Fri May  6 23:29:06 UTC 2022 Try: 2

Fri May  6 23:29:06 UTC 2022 Try: 3

Fri May  6 23:29:06 UTC 2022 Try: 4

Fri May  6 23:29:06 UTC 2022 Try: 5

Fri May  6 23:29:06 UTC 2022 Try: 6

Fri May  6 23:29:06 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
May  6 23:29:09.229: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:11.236: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:13.233: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:15.232: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3683 to expose endpoints map[pod-server-1:[80]]
May  6 23:29:15.241: INFO: successfully validated that service svc-udp in namespace conntrack-3683 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
May  6 23:29:25.270: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:27.274: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:29.274: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:31.273: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:33.275: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:35.274: INFO: The status of Pod pod-server-2 is Running (Ready = true)
May  6 23:29:35.276: INFO: Cleaning up pod-server-1 pod
May  6 23:29:35.283: INFO: Waiting for pod pod-server-1 to disappear
May  6 23:29:35.286: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3683 to expose endpoints map[pod-server-2:[80]]
May  6 23:29:35.293: INFO: successfully validated that service svc-udp in namespace conntrack-3683 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:45.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-3683" for this suite.


• [SLOW TEST:44.303 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":419,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:15.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-99966b32-1a5f-41d7-959e-7bd0e638a744]
STEP: Verifying pods for RC slow-terminating-unready-pod
May  6 23:29:15.420: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
May  6 23:29:21.461: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-fsrqg]: "NOW: 2022-05-06 23:29:21.460623106 +0000 UTC m=+3.232016053", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-9583.svc.cluster.local
May  6 23:29:21.461: INFO: Creating new exec pod
May  6 23:29:35.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9583 exec execpod-gk9zz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/'
May  6 23:29:35.788: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/\n"
May  6 23:29:35.788: INFO: stdout: "NOW: 2022-05-06 23:29:35.779882596 +0000 UTC m=+17.551275597"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-9583 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
May  6 23:29:40.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9583 exec execpod-gk9zz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/; test "$?" -ne "0"'
May  6 23:29:41.105: INFO: rc: 1
May  6 23:29:41.105: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-05-06 23:29:41.095628571 +0000 UTC m=+22.867021518, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9583 exec execpod-gk9zz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2022-05-06 23:29:41.095628571 +0000 UTC m=+22.867021518
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
May  6 23:29:43.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9583 exec execpod-gk9zz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/; test "$?" -ne "0"'
May  6 23:29:44.365: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/\n+ test 7 -ne 0\n"
May  6 23:29:44.365: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
May  6 23:29:44.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9583 exec execpod-gk9zz -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/'
May  6 23:29:45.653: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9583.svc.cluster.local:80/\n"
May  6 23:29:45.653: INFO: stdout: "NOW: 2022-05-06 23:29:45.638327473 +0000 UTC m=+27.409720420"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-9583
STEP: deleting service tolerate-unready in namespace services-9583
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:45.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9583" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:30.336 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":3,"skipped":447,"failed":0}

SSSS
------------------------------
May  6 23:29:45.706: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:09.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-1286
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:29:09.149: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:09.379: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:11.381: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:13.384: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:15.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:17.384: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:19.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:21.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:23.384: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:25.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:27.383: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:29.382: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:31.384: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:31.388: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:33.392: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:35.392: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:43.427: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:43.428: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May  6 23:29:43.449: INFO: Service node-port-service in namespace nettest-1286 found.
May  6 23:29:43.464: INFO: Service session-affinity-service in namespace nettest-1286 found.
STEP: Waiting for NodePort service to expose endpoint
May  6 23:29:44.467: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May  6 23:29:45.470: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: checking kube-proxy URLs
STEP: Getting kube-proxy self URL /healthz
May  6 23:29:45.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-1286 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
May  6 23:29:45.737: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
May  6 23:29:45.737: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Fri, 06 May 2022 23:29:45 GMT\r\nContent-Length: 151\r\n\r\n{\"lastUpdated\": \"2022-05-06 23:29:45.72772946 +0000 UTC m=+11781.364986577\",\"currentTime\": \"2022-05-06 23:29:45.72772946 +0000 UTC m=+11781.364986577\"}"
STEP: Getting kube-proxy self URL /healthz
May  6 23:29:45.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-1286 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
May  6 23:29:46.025: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
May  6 23:29:46.025: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Fri, 06 May 2022 23:29:46 GMT\r\nContent-Length: 153\r\n\r\n{\"lastUpdated\": \"2022-05-06 23:29:46.016056186 +0000 UTC m=+11781.653313301\",\"currentTime\": \"2022-05-06 23:29:46.016056186 +0000 UTC m=+11781.653313301\"}"
STEP: Checking status code against http://localhost:10249/proxyMode
May  6 23:29:46.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-1286 exec host-test-container-pod -- /bin/sh -x -c curl -o /dev/null -i -q -s -w %{http_code} --connect-timeout 1 http://localhost:10249/proxyMode'
May  6 23:29:46.699: INFO: stderr: "+ curl -o /dev/null -i -q -s -w '%{http_code}' --connect-timeout 1 http://localhost:10249/proxyMode\n"
May  6 23:29:46.699: INFO: stdout: "200"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:46.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1286" for this suite.


• [SLOW TEST:37.666 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
------------------------------
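The kube-proxy check above curls `http://localhost:10256/healthz`, expects HTTP 200, and gets back a small JSON body with `lastUpdated`/`currentTime`. A rough local equivalent of that probe, using a stand-in server instead of a real kube-proxy (port and payload here are illustrative):

```python
# Sketch: serve a healthz-style JSON body locally, then probe it the way
# the e2e test probes kube-proxy (status code + JSON fields).
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Healthz(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({
            "lastUpdated": "2022-05-06 23:29:45 +0000 UTC",
            "currentTime": "2022-05-06 23:29:45 +0000 UTC",
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/healthz" % server.server_port
with urllib.request.urlopen(url, timeout=2) as resp:
    status = resp.status
    payload = json.load(resp)
server.shutdown()
print(status, sorted(payload))
```

The second URL in the log, `localhost:10249/proxyMode`, is probed the same way except only the status code (`curl -w %{http_code}`) is checked.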
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:21.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-5626
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:29:21.198: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:21.229: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:23.232: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:25.231: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:27.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:29.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:31.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:33.234: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:35.231: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:37.234: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:39.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:41.233: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:43.234: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:43.239: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  6 23:29:45.242: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:29:51.263: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:29:51.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:51.270: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:51.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5626" for this suite.


S [SKIPPING] [30.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
May  6 23:29:51.282: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:45.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-9210
STEP: deleting original service nodeport-reuse
May  6 23:29:45.531: INFO: Creating new host exec pod
May  6 23:29:45.544: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:47.547: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:49.548: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:51.548: INFO: The status of Pod hostexec is Running (Ready = true)
May  6 23:29:51.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9210 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :31701' | tail -n +2 | grep LISTEN'
May  6 23:29:51.877: INFO: stderr: "+ tail -n +2\n+ ss -ant46 'sport = :31701'\n+ grep LISTEN\n"
May  6 23:29:51.877: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 31701
STEP: deleting service nodeport-reuse in namespace services-9210
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:29:51.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9210" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:6.417 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":508,"failed":0}
May  6 23:29:51.906: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:41.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-9356
STEP: creating replication controller externalip-test in namespace services-9356
I0506 23:29:42.009009      24 runners.go:190] Created replication controller with name: externalip-test, namespace: services-9356, replica count: 2
I0506 23:29:45.061097      24 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:48.061987      24 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 23:29:48.062: INFO: Creating new exec pod
May  6 23:29:57.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9356 exec execpodhqmmb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May  6 23:29:57.610: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May  6 23:29:57.610: INFO: stdout: ""
May  6 23:29:58.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9356 exec execpodhqmmb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May  6 23:29:58.890: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May  6 23:29:58.890: INFO: stdout: ""
May  6 23:29:59.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9356 exec execpodhqmmb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
May  6 23:30:00.164: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
May  6 23:30:00.164: INFO: stdout: "externalip-test-2fg6q"
May  6 23:30:00.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9356 exec execpodhqmmb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.37.137 80'
May  6 23:30:00.430: INFO: stderr: "+ nc -v -t -w 2 10.233.37.137 80\nConnection to 10.233.37.137 80 port [tcp/http] succeeded!\n+ echo hostName\n"
May  6 23:30:00.430: INFO: stdout: "externalip-test-dkpkz"
May  6 23:30:00.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9356 exec execpodhqmmb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
May  6 23:30:00.786: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n+ echo hostName\n"
May  6 23:30:00.786: INFO: stdout: "externalip-test-2fg6q"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:30:00.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9356" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:18.821 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":3,"skipped":708,"failed":0}
May  6 23:30:00.797: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:35.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-7553
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  6 23:29:35.776: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:29:35.809: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:37.813: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:39.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:41.814: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:43.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:45.814: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:47.814: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:49.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:51.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:53.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:55.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  6 23:29:57.813: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  6 23:29:57.821: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  6 23:30:01.844: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May  6 23:30:01.844: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  6 23:30:01.852: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:30:01.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7553" for this suite.


S [SKIPPING] [26.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
May  6 23:30:01.864: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:09.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-6976
STEP: creating service service-headless in namespace services-6976
STEP: creating replication controller service-headless in namespace services-6976
I0506 23:29:09.500995      33 runners.go:190] Created replication controller with name: service-headless, namespace: services-6976, replica count: 3
I0506 23:29:12.552665      33 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:15.553834      33 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:18.555572      33 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-6976
STEP: creating service service-headless-toggled in namespace services-6976
STEP: creating replication controller service-headless-toggled in namespace services-6976
I0506 23:29:18.567483      33 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-6976, replica count: 3
I0506 23:29:21.618702      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:24.620201      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:27.621843      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:30.622953      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
May  6 23:29:30.625: INFO: Creating new host exec pod
May  6 23:29:30.639: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:32.642: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:29:32.642: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:29:38.660: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done" in pod services-6976/verify-service-up-host-exec-pod
May  6 23:29:38.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done'
May  6 23:29:39.021: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.48.128:80\n+ echo\n" [the `+ wget … + echo` pair repeats for each of the 150 loop iterations; log truncated mid-output here]
wget -q -T 1 -O - http://10.233.48.128:80\n+ echo\n"
May  6 23:29:39.021: INFO: stdout: 150 hostname lines, all from the three backends "service-headless-toggled-25qb7", "service-headless-toggled-28sjx", and "service-headless-toggled-xzmn9"
May  6 23:29:39.022: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done" in pod services-6976/verify-service-up-exec-pod-fvvln
May  6 23:29:39.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-up-exec-pod-fvvln -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done'
May  6 23:29:39.400: INFO: stderr: "+ seq 1 150\n", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.48.128:80\n+ echo\n"
May  6 23:29:39.401: INFO: stdout: 150 hostname lines, all from the three backends "service-headless-toggled-25qb7", "service-headless-toggled-28sjx", and "service-headless-toggled-xzmn9"
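The backend check in the run above can be reproduced offline: the framework collects the wget responses and compares the set of distinct pod hostnames against the expected backends. A minimal sketch (the helper name is hypothetical; it assumes newline-separated stdout exactly as quoted in this log):

```python
def distinct_backends(stdout: str) -> set:
    """Return the set of pod hostnames seen in the wget output.

    Blank lines (produced by the trailing `echo` when a request
    times out) are dropped.
    """
    return {line for line in stdout.splitlines() if line}

# Sample names taken from the stdout quoted above.
sample = ("service-headless-toggled-25qb7\n"
          "service-headless-toggled-28sjx\n"
          "\n"
          "service-headless-toggled-xzmn9\n")
print(sorted(distinct_backends(sample)))
# → ['service-headless-toggled-25qb7', 'service-headless-toggled-28sjx', 'service-headless-toggled-xzmn9']
```

With 150 probes against a three-endpoint service, seeing all three names in this set is what the later "verifying service has 3 reachable backends" step asserts.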
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6976
STEP: Deleting pod verify-service-up-exec-pod-fvvln in namespace services-6976
STEP: verifying service-headless is not up
May  6 23:29:39.417: INFO: Creating new host exec pod
May  6 23:29:39.428: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:41.432: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:43.432: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:29:43.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed'
May  6 23:29:46.273: INFO: rc: 28
May  6 23:29:46.274: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed" in pod services-6976/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.9.224:80
command terminated with exit code 28

error:
exit status 28
Output: 
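The down-check above relies on curl's exit status: `--connect-timeout 2` makes an unreachable ClusterIP fail with rc 28 (curl's "operation timed out"), so the `&& echo service-down-failed` marker only appears if the service unexpectedly answers. The decision logic can be sketched as (function name hypothetical):

```python
CURL_TIMEOUT_RC = 28  # curl's "operation timed out" exit code

def service_confirmed_down(rc: int, stdout: str) -> bool:
    # A nonzero curl rc means the `&& echo service-down-failed` marker
    # never ran; seeing the marker means the service still answered.
    return rc != 0 and "service-down-failed" not in stdout

print(service_confirmed_down(CURL_TIMEOUT_RC, ""))  # the rc 28 case logged above
# → True
```

This is why the log records `rc: 28` and an "error" from kubectl exec, yet the test step itself passes.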
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6976
STEP: adding service.kubernetes.io/headless label
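The label toggle in this step can also be issued by hand with `kubectl label`; a trailing `=` sets the label to the empty string and a trailing `-` removes it. A hedged sketch (the helper is illustrative, not part of the framework):

```python
def headless_toggle_cmd(namespace: str, service: str, enable: bool) -> list:
    # `key=` sets service.kubernetes.io/headless to the empty string,
    # `key-` removes it again (standard kubectl label syntax).
    op = "service.kubernetes.io/headless" + ("=" if enable else "-")
    return ["kubectl", "--namespace", namespace, "label", "service", service, op]

print(" ".join(headless_toggle_cmd("services-6976", "service-headless-toggled", True)))
# → kubectl --namespace services-6976 label service service-headless-toggled service.kubernetes.io/headless=
```

While the label is present, kube-proxy stops programming the ClusterIP, which is what the following "verifying service is not up" step checks.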
STEP: verifying service is not up
May  6 23:29:46.290: INFO: Creating new host exec pod
May  6 23:29:46.301: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:48.306: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:50.305: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:52.306: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:54.305: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:56.305: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:58.306: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:00.306: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:02.307: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:30:02.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.48.128:80 && echo service-down-failed'
May  6 23:30:04.560: INFO: rc: 28
May  6 23:30:04.560: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.48.128:80 && echo service-down-failed" in pod services-6976/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.48.128:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.48.128:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6976
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
May  6 23:30:04.576: INFO: Creating new host exec pod
May  6 23:30:04.590: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:06.596: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:08.593: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:30:08.593: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:30:12.613: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done" in pod services-6976/verify-service-up-host-exec-pod
May  6 23:30:12.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done'
May  6 23:30:13.002: INFO: stderr: "+ seq 1 150\n", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.48.128:80\n+ echo\n"
May  6 23:30:13.003: INFO: stdout: "service-headless-toggled-xzmn9\n..." (150 responses elided; each line names one of the three backend pods: service-headless-toggled-xzmn9, service-headless-toggled-25qb7, service-headless-toggled-28sjx)
May  6 23:30:13.003: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done" in pod services-6976/verify-service-up-exec-pod-qv92h
May  6 23:30:13.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-up-exec-pod-qv92h -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.48.128:80 2>&1 || true; echo; done'
May  6 23:30:13.424: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.48.128:80\n+ echo\n..." (trace elided: the wget/echo pair repeats identically for all 150 iterations)
May  6 23:30:13.424: INFO: stdout: "service-headless-toggled-25qb7\n..." (150 responses elided; each line names one of the three backend pods: service-headless-toggled-25qb7, service-headless-toggled-28sjx, service-headless-toggled-xzmn9)
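The 150-probe check above (wget in a loop against the service ClusterIP, one backend pod name per response) is essentially a coverage test over the service's endpoints. A minimal stand-in for that check, assuming the framework simply requires every expected backend to answer at least once (pod names taken from the log; the real logic lives in the e2e framework, not in this sketch):

```python
# Minimal sketch (not the framework's actual code): the service is considered
# "up" if each expected backend pod answered at least one of the 150 probes.
from collections import Counter

def verify_service_up(stdout: str, expected_pods: set) -> bool:
    # Count one hit per non-empty response line (each line is a pod hostname).
    hits = Counter(line for line in stdout.splitlines() if line)
    return set(hits) == expected_pods

sample = ("service-headless-toggled-xzmn9\n"
          "service-headless-toggled-25qb7\n"
          "service-headless-toggled-28sjx\n")
assert verify_service_up(sample, {"service-headless-toggled-xzmn9",
                                  "service-headless-toggled-25qb7",
                                  "service-headless-toggled-28sjx"})
```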
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6976
STEP: Deleting pod verify-service-up-exec-pod-qv92h in namespace services-6976
STEP: verifying service-headless is still not up
May  6 23:30:13.440: INFO: Creating new host exec pod
May  6 23:30:13.452: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:15.455: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:30:15.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed'
May  6 23:30:17.770: INFO: rc: 28
May  6 23:30:17.770: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed" in pod services-6976/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6976 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.9.224:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.9.224:80
command terminated with exit code 28

error:
exit status 28
Output: 
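The `rc: 28` above is curl's "operation timed out" exit code; the service-down verification passes precisely because the connection attempt timed out and the `service-down-failed` sentinel was never echoed. A hedged sketch of that interpretation (not the framework's implementation):

```python
# Sketch, not the framework's code: a connect timeout (curl exit 28) with no
# "service-down-failed" sentinel in stdout means the service is truly down.
CURL_TIMEOUT_RC = 28  # curl's documented "operation timed out" exit status

def service_is_down(rc: int, stdout: str) -> bool:
    return rc == CURL_TIMEOUT_RC and "service-down-failed" not in stdout

assert service_is_down(28, "")                        # timeout: service down
assert not service_is_down(0, "service-down-failed")  # got a reply: still up
```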
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6976
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:30:17.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6976" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:68.320 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":4,"skipped":340,"failed":0}
May  6 23:30:17.792: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:11.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-1783
STEP: creating a client pod for probing the service svc-udp
May  6 23:29:11.868: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:13.871: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:15.873: INFO: The status of Pod pod-client is Running (Ready = true)
May  6 23:29:15.880: INFO: Pod client logs: Fri May  6 23:29:15 UTC 2022
Fri May  6 23:29:15 UTC 2022 Try: 1

Fri May  6 23:29:15 UTC 2022 Try: 2

Fri May  6 23:29:15 UTC 2022 Try: 3

Fri May  6 23:29:15 UTC 2022 Try: 4

Fri May  6 23:29:15 UTC 2022 Try: 5

Fri May  6 23:29:15 UTC 2022 Try: 6

Fri May  6 23:29:15 UTC 2022 Try: 7

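The "Try: N" lines above come from the client pod's probe loop (the real client is the agnhost test image). As an illustrative stand-in, assuming one UDP datagram per attempt with any reply counting as success (host, port, and payload here are made-up sketch values):

```python
# Hedged sketch of the client pod's probe loop: send one UDP datagram per
# attempt, log each try like the pod does, treat any reply as success.
import socket
import time

def probe_udp(host: str, port: int, tries: int, timeout: float = 1.0) -> bool:
    for attempt in range(1, tries + 1):
        # Mimic the pod's "Fri May  6 23:29:15 UTC 2022 Try: 1" log format.
        print(time.strftime("%a %b %d %H:%M:%S UTC %Y", time.gmtime()),
              "Try:", attempt)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            try:
                s.sendto(b"hostname", (host, port))
                s.recvfrom(1024)  # any datagram back means the backend answered
                return True
            except OSError:       # timeout or ICMP port-unreachable: retry
                continue
    return False
```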
STEP: creating a backend pod pod-server-1 for the service svc-udp
May  6 23:29:15.894: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:17.898: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:19.899: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:21.898: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:23.897: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-1783 to expose endpoints map[pod-server-1:[80]]
May  6 23:29:23.909: INFO: successfully validated that service svc-udp in namespace conntrack-1783 exposes endpoints map[pod-server-1:[80]]
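The endpoint validation above ("expose endpoints map[pod-server-1:[80]]") amounts to polling until the observed pod-to-ports mapping equals the expected one. A minimal sketch of that comparison (not the framework's implementation):

```python
# Sketch: the framework repeatedly reads the service's Endpoints object and
# succeeds once the observed pod->ports mapping matches the expected one.
def endpoints_match(observed: dict, expected: dict) -> bool:
    # Port order within a pod's list is irrelevant, so normalize both sides.
    norm = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return norm(observed) == norm(expected)

assert endpoints_match({"pod-server-1": [80]}, {"pod-server-1": [80]})
assert not endpoints_match({}, {"pod-server-1": [80]})
```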
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
May  6 23:30:23.932: INFO: Pod client logs: Fri May  6 23:29:15 UTC 2022
Fri May  6 23:29:15 UTC 2022 Try: 1

[... tries 2-84 elided: the client retried in batches of six roughly every five seconds, with no backend response recorded between attempts ...]

Fri May  6 23:30:20 UTC 2022 Try: 85

May  6 23:30:23.933: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001742780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001742780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001742780, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-1783".
STEP: Found 8 events.
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:14 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:14 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 522.548427ms
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:15 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:15 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:17 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:18 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 309.069873ms
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:18 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
May  6 23:30:23.937: INFO: At 2022-05-06 23:29:18 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
May  6 23:30:23.940: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
May  6 23:30:23.940: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:11 +0000 UTC  }]
May  6 23:30:23.940: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:15 +0000 UTC  }]
May  6 23:30:23.940: INFO: 
May  6 23:30:23.944: INFO: 
Logging node info for node master1
May  6 23:30:23.946: INFO: Node Info: &Node{ObjectMeta:{master1    3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 75931 0 2022-05-06 20:07:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:15 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:15 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:15 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:30:15 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:30:23.947: INFO: 
Logging kubelet events for node master1
May  6 23:30:23.949: INFO: 
Logging pods the kubelet thinks are on node master1
May  6 23:30:23.975: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:30:23.975: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-controller-manager ready: true, restart count 2
May  6 23:30:23.975: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:30:23.975: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:23.975: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:30:23.975: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container docker-registry ready: true, restart count 0
May  6 23:30:23.975: INFO: 	Container nginx ready: true, restart count 0
May  6 23:30:23.975: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-scheduler ready: true, restart count 0
May  6 23:30:23.975: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:30:23.975: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Init container install-cni ready: true, restart count 0
May  6 23:30:23.975: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:30:23.975: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:23.975: INFO: 	Container coredns ready: true, restart count 1
May  6 23:30:24.078: INFO: 
Latency metrics for node master1
May  6 23:30:24.078: INFO: 
Logging node info for node master2
May  6 23:30:24.080: INFO: Node Info: &Node{ObjectMeta:{master2    0aed38bc-6408-4920-b364-7d6b9bff7102 75979 0 2022-05-06 20:08:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:21 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:21 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:21 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:30:21 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:30:24.081: INFO: 
Logging kubelet events for node master2
May  6 23:30:24.083: INFO: 
Logging pods the kubelet thinks are on node master2
May  6 23:30:24.099: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:30:24.099: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-scheduler ready: true, restart count 2
May  6 23:30:24.099: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:30:24.099: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Init container install-cni ready: true, restart count 0
May  6 23:30:24.099: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:30:24.099: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:24.099: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:30:24.099: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-controller-manager ready: true, restart count 1
May  6 23:30:24.099: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:30:24.099: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.099: INFO: 	Container autoscaler ready: true, restart count 1
May  6 23:30:24.187: INFO: 
Latency metrics for node master2
May  6 23:30:24.187: INFO: 
Logging node info for node master3
May  6 23:30:24.189: INFO: Node Info: &Node{ObjectMeta:{master3    1cc41c26-3708-4912-8ff5-aa83b70d989e 75957 0 2022-05-06 20:08:11 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:30:24.190: INFO: 
Logging kubelet events for node master3
May  6 23:30:24.192: INFO: 
Logging pods the kubelet thinks are on node master3
May  6 23:30:24.206: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:30:24.206: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container coredns ready: true, restart count 1
May  6 23:30:24.206: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container nfd-controller ready: true, restart count 0
May  6 23:30:24.206: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:30:24.206: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-scheduler ready: true, restart count 2
May  6 23:30:24.206: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:30:24.206: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Init container install-cni ready: true, restart count 2
May  6 23:30:24.206: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:30:24.206: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:24.206: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:30:24.206: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.206: INFO: 	Container kube-controller-manager ready: true, restart count 3
May  6 23:30:24.299: INFO: 
Latency metrics for node master3
May  6 23:30:24.299: INFO: 
Logging node info for node node1
May  6 23:30:24.302: INFO: Node Info: &Node{ObjectMeta:{node1    851b0a69-efd4-49b7-98ef-f0cfe2d311c6 75958 0 2022-05-06 20:09:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 
20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:27:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:19 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:19 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:19 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:30:19 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:859ab6768a6f26a79bc42b231664111317d095a4f04e4b6fe79ce37b3d199097 nginx:latest],SizeBytes:141522124,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:30:24.303: INFO: 
Logging kubelet events for node node1
May  6 23:30:24.305: INFO: 
Logging pods the kubelet thinks are on node node1
May  6 23:30:24.321: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container nginx-proxy ready: true, restart count 2
May  6 23:30:24.321: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:30:24.321: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Init container install-cni ready: true, restart count 2
May  6 23:30:24.321: INFO: 	Container kube-flannel ready: true, restart count 3
May  6 23:30:24.321: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container nfd-worker ready: true, restart count 0
May  6 23:30:24.321: INFO: up-down-1-l2jz2 started at 2022-05-06 23:29:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container up-down-1 ready: false, restart count 0
May  6 23:30:24.321: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:30:24.321: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:30:24.321: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container collectd ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container collectd-exporter ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container rbac-proxy ready: true, restart count 0
May  6 23:30:24.321: INFO: execpodlzmnv started at 2022-05-06 23:29:55 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container agnhost-container ready: true, restart count 0
May  6 23:30:24.321: INFO: up-down-2-kdf4x started at 2022-05-06 23:29:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container up-down-2 ready: true, restart count 0
May  6 23:30:24.321: INFO: service-headless-bxjp7 started at 2022-05-06 23:29:09 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container service-headless ready: true, restart count 0
May  6 23:30:24.321: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container prometheus-operator ready: true, restart count 0
May  6 23:30:24.321: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container config-reloader ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container grafana ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container prometheus ready: true, restart count 1
May  6 23:30:24.321: INFO: pod-client started at 2022-05-06 23:29:11 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container pod-client ready: true, restart count 0
May  6 23:30:24.321: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container kube-sriovdp ready: true, restart count 0
May  6 23:30:24.321: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container discover ready: false, restart count 0
May  6 23:30:24.321: INFO: 	Container init ready: false, restart count 0
May  6 23:30:24.321: INFO: 	Container install ready: false, restart count 0
May  6 23:30:24.321: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container nodereport ready: true, restart count 0
May  6 23:30:24.321: INFO: 	Container reconcile ready: true, restart count 0
May  6 23:30:24.321: INFO: up-down-2-mnchx started at 2022-05-06 23:29:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.321: INFO: 	Container up-down-2 ready: true, restart count 0
May  6 23:30:24.667: INFO: 
Latency metrics for node node1
May  6 23:30:24.667: INFO: 
Logging node info for node node2
May  6 23:30:24.670: INFO: Node Info: &Node{ObjectMeta:{node2    2dab2a66-f2eb-49db-9725-3dda82cede11 75955 0 2022-05-06 20:09:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 
20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:28:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-06 22:30:11 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:30:18 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:30:24.670: INFO: 
Logging kubelet events for node node2
May  6 23:30:24.672: INFO: 
Logging pods the kubelet thinks are on node node2
May  6 23:30:24.693: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container nfd-worker ready: true, restart count 0
May  6 23:30:24.693: INFO: service-headless-toggled-28sjx started at 2022-05-06 23:29:18 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container service-headless-toggled ready: true, restart count 0
May  6 23:30:24.693: INFO: service-headless-toggled-xzmn9 started at 2022-05-06 23:29:18 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container service-headless-toggled ready: true, restart count 0
May  6 23:30:24.693: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:30:24.693: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container nodereport ready: true, restart count 0
May  6 23:30:24.693: INFO: 	Container reconcile ready: true, restart count 0
May  6 23:30:24.693: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container tas-extender ready: true, restart count 0
May  6 23:30:24.693: INFO: pod-server-1 started at 2022-05-06 23:29:15 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container agnhost-container ready: true, restart count 0
May  6 23:30:24.693: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.693: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:30:24.693: INFO: nodeport-update-service-kn6q6 started at 2022-05-06 23:29:45 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container nodeport-update-service ready: true, restart count 0
May  6 23:30:24.694: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container collectd ready: true, restart count 0
May  6 23:30:24.694: INFO: 	Container collectd-exporter ready: true, restart count 0
May  6 23:30:24.694: INFO: 	Container rbac-proxy ready: true, restart count 0
May  6 23:30:24.694: INFO: nodeport-update-service-qr6pm started at 2022-05-06 23:29:45 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container nodeport-update-service ready: true, restart count 0
May  6 23:30:24.694: INFO: service-headless-toggled-25qb7 started at 2022-05-06 23:29:18 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container service-headless-toggled ready: true, restart count 0
May  6 23:30:24.694: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container discover ready: false, restart count 0
May  6 23:30:24.694: INFO: 	Container init ready: false, restart count 0
May  6 23:30:24.694: INFO: 	Container install ready: false, restart count 0
May  6 23:30:24.694: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:30:24.694: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:30:24.694: INFO: up-down-1-dp98k started at 2022-05-06 23:29:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container up-down-1 ready: false, restart count 0
May  6 23:30:24.694: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container cmk-webhook ready: true, restart count 0
May  6 23:30:24.694: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container nginx-proxy ready: true, restart count 2
May  6 23:30:24.694: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Init container install-cni ready: true, restart count 1
May  6 23:30:24.694: INFO: 	Container kube-flannel ready: true, restart count 2
May  6 23:30:24.694: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May  6 23:30:24.694: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May  6 23:30:24.694: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container kube-sriovdp ready: true, restart count 0
May  6 23:30:24.694: INFO: service-headless-7t7p4 started at 2022-05-06 23:29:09 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container service-headless ready: true, restart count 0
May  6 23:30:24.694: INFO: service-headless-5mt29 started at 2022-05-06 23:29:09 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container service-headless ready: true, restart count 0
May  6 23:30:24.694: INFO: up-down-1-fsz7f started at 2022-05-06 23:29:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container up-down-1 ready: false, restart count 0
May  6 23:30:24.694: INFO: up-down-2-j4bdf started at 2022-05-06 23:29:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:30:24.694: INFO: 	Container up-down-2 ready: true, restart count 0
May  6 23:30:24.966: INFO: 
Latency metrics for node node2
May  6 23:30:24.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-1783" for this suite.


• Failure [73.167 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  May  6 23:30:23.933: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":76,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
May  6 23:30:24.978: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:25.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-1731
STEP: creating service up-down-1 in namespace services-1731
STEP: creating replication controller up-down-1 in namespace services-1731
I0506 23:29:25.246679      26 runners.go:190] Created replication controller with name: up-down-1, namespace: services-1731, replica count: 3
I0506 23:29:28.297545      26 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:31.298350      26 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:34.299191      26 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:37.299713      26 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:40.300659      26 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-1731
STEP: creating service up-down-2 in namespace services-1731
STEP: creating replication controller up-down-2 in namespace services-1731
I0506 23:29:40.314107      26 runners.go:190] Created replication controller with name: up-down-2, namespace: services-1731, replica count: 3
I0506 23:29:43.365139      26 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:46.365851      26 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
May  6 23:29:46.368: INFO: Creating new host exec pod
May  6 23:29:46.381: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:48.383: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:29:50.383: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:29:50.383: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:30:00.400: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.61:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-host-exec-pod
May  6 23:30:00.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.61:80 2>&1 || true; echo; done'
May  6 23:30:00.833: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.4.61:80\n+ echo\n" (the "+ wget -q -T 1 -O - http://10.233.4.61:80\n+ echo\n" trace pair repeats identically for all 150 iterations; repeats elided)
May  6 23:30:00.833: INFO: stdout: "up-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1
-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-l2jz2\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-l2jz2\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-fsz7f\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\nup-down-1-dp98k\n"
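The "verifying service has 3 reachable backends" step above passes when the wget loop's stdout contains every backend pod name of the replication controller. As a minimal illustrative sketch (not the suite's actual helper, which lives in the e2e framework), the distinct-backend count can be derived from captured stdout like the log shows with standard tools; the sample names below are taken from the output above:

```shell
# Illustrative stand-in for the e2e "service up" assertion: given the
# captured wget-loop stdout, count how many distinct backend pod names
# responded. A short sample of the logged output stands in for the
# full 150-line capture.
stdout='up-down-1-fsz7f
up-down-1-dp98k
up-down-1-l2jz2
up-down-1-fsz7f
up-down-1-dp98k'

# Each response line is one backend hostname; sort -u deduplicates,
# wc -l counts, tr strips wc's padding on BSD-style systems.
distinct=$(printf '%s\n' "$stdout" | sort -u | wc -l | tr -d ' ')
echo "distinct backends: $distinct"   # expect 3 for a 3-replica RC
```

The check succeeds here because all three replicas (up-down-1-fsz7f, up-down-1-dp98k, up-down-1-l2jz2) appear in the responses; a missing name would indicate an unreachable backend.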
May  6 23:30:00.834: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.61:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-exec-pod-9d6g4
May  6 23:30:00.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-exec-pod-9d6g4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.61:80 2>&1 || true; echo; done'
May  6 23:30:01.252: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.4.61:80\n+ echo\n" (the "+ wget -q -T 1 -O - http://10.233.4.61:80\n+ echo\n" trace pair repeats identically for all 150 iterations; repeats elided)
May  6 23:30:01.252: INFO: stdout: "up-down-1-dp98k\nup-down-1-dp98k\nup-down-1-fsz7f\n ... (150 responses total, all served by the three up-down-1 backends dp98k, fsz7f and l2jz2; full listing elided) ...\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1731
STEP: Deleting pod verify-service-up-exec-pod-9d6g4 in namespace services-1731
STEP: verifying service up-down-2 is up
May  6 23:30:01.266: INFO: Creating new host exec pod
May  6 23:30:01.281: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:03.286: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:05.286: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:07.284: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:09.285: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:11.286: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:13.284: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:15.283: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:30:15.283: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:30:19.302: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-host-exec-pod
May  6 23:30:19.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:19.644: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n ... (the '+ wget -q -T 1 -O - http://10.233.59.198:80' / '+ echo' pair repeats for all 150 iterations; trace elided) ...\n"
May  6 23:30:19.644: INFO: stdout: "up-down-2-mnchx\nup-down-2-j4bdf\nup-down-2-kdf4x\n ... (150 responses total, all served by the three up-down-2 backends mnchx, j4bdf and kdf4x; full listing elided) ...\n"
May  6 23:30:19.644: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-exec-pod-dc2gc
May  6 23:30:19.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-exec-pod-dc2gc -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:20.038: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n ... (the '+ wget -q -T 1 -O - http://10.233.59.198:80' / '+ echo' pair repeats for all 150 iterations; trace elided) ...\n"
May  6 23:30:20.038: INFO: stdout: "up-down-2-mnchx\nup-down-2-j4bdf\nup-down-2-j4bdf\n ... (150 responses total, all served by the three up-down-2 backends mnchx, j4bdf and kdf4x; full listing elided) ...\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1731
STEP: Deleting pod verify-service-up-exec-pod-dc2gc in namespace services-1731
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-1731, will wait for the garbage collector to delete the pods
May  6 23:30:20.111: INFO: Deleting ReplicationController up-down-1 took: 4.267877ms
May  6 23:30:20.212: INFO: Terminating ReplicationController up-down-1 pods took: 100.728575ms
STEP: verifying service up-down-1 is not up
May  6 23:30:26.723: INFO: Creating new host exec pod
May  6 23:30:26.737: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:28.741: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:30.740: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:32.742: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
May  6 23:30:32.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.61:80 && echo service-down-failed'
May  6 23:30:35.018: INFO: rc: 28
May  6 23:30:35.018: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.4.61:80 && echo service-down-failed" in pod services-1731/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.61:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.4.61:80
command terminated with exit code 28

error:
exit status 28
Output: 
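The rc 28 above is curl's "operation timed out" exit status; the `&& echo service-down-failed` suffix only fires if curl unexpectedly reaches the deleted service, so a timeout is the passing outcome. A minimal sketch of that interpretation (the `sh -c 'exit 28'` is a stand-in for the timed-out curl against the cluster-internal 10.233.4.61, which cannot be reached from outside):

```shell
# Stand-in for: curl -g -s --connect-timeout 2 http://10.233.4.61:80
# curl exits 28 when the connection attempt times out.
sh -c 'exit 28' && echo service-down-failed
rc=$?
# Non-zero rc means the endpoint did not answer, which is what the test expects
# after the up-down-1 ReplicationController was deleted.
[ "$rc" -ne 0 ] && echo "service is down (curl rc=$rc)"
```

Because the `&&` short-circuits, `service-down-failed` is never printed when curl fails, and the harness treats the non-zero exit as success for this step.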
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1731
STEP: verifying service up-down-2 is still up
May  6 23:30:35.026: INFO: Creating new host exec pod
May  6 23:30:35.039: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:37.044: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:30:37.044: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
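The reachability check works because each wget response body is the name of the serving pod, so counting distinct names in the loop's stdout gives the number of live endpoints. A small illustration of that counting step, using a made-up four-line sample of the loop output (the pod names are taken from the log above):

```shell
# Sample of the verification loop's stdout: one backend pod name per response.
# Counting distinct names tells us how many endpoints answered.
printf 'up-down-2-mnchx\nup-down-2-j4bdf\nup-down-2-kdf4x\nup-down-2-mnchx\n' \
  | sort -u | wc -l    # 3 distinct backends
```

The e2e framework does the equivalent bookkeeping in Go over the full 150 responses and fails the step if fewer than the expected three backends appear.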
May  6 23:30:41.063: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-host-exec-pod
May  6 23:30:41.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:41.460: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n ... (the '+ wget -q -T 1 -O - http://10.233.59.198:80' / '+ echo' pair repeats; trace elided) ...\n+ 
wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n"
May  6 23:30:41.460: INFO: stdout: 150 responses, each naming one of the service's backend pods: up-down-2-kdf4x, up-down-2-j4bdf, up-down-2-mnchx [full list elided; all three backends answered]
May  6 23:30:41.461: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-exec-pod-8pkk2
May  6 23:30:41.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-exec-pod-8pkk2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:41.838: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n" [the wget/echo pair repeats identically for all 150 iterations; trace elided]
May  6 23:30:41.838: INFO: stdout: 150 responses, each naming one of the service's backend pods: up-down-2-j4bdf, up-down-2-mnchx, up-down-2-kdf4x [full list elided; all three backends answered]
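The probe the framework drives above is a plain shell loop: hit the service's ClusterIP 150 times with a 1-second timeout and echo a separator after each attempt. A minimal sketch of the loop mechanics follows; the real probe command is kept as a comment, and it is stubbed here with a fixed pod name (hypothetical) so the sketch runs without a cluster:

```shell
SERVICE_IP=${SERVICE_IP:-10.233.59.198}   # assumed ClusterIP from this run
probe() {
  # Real probe used by the e2e test:
  #   wget -q -T 1 -O - "http://$SERVICE_IP:80" 2>&1 || true
  echo "up-down-2-stub"                   # hypothetical stand-in response
}
# 5 iterations instead of 150 to keep the sketch short; count non-empty replies.
for i in $(seq 1 5); do probe; echo; done | grep -c '^up-down-2'
```

The `|| true` in the real command keeps the loop going when a single wget times out, so a transient failure shows up as a blank line in stdout rather than aborting the whole check.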
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1731
STEP: Deleting pod verify-service-up-exec-pod-8pkk2 in namespace services-1731
STEP: creating service up-down-3 in namespace services-1731
STEP: creating replication controller up-down-3 in namespace services-1731
I0506 23:30:41.862994      26 runners.go:190] Created replication controller with name: up-down-3, namespace: services-1731, replica count: 3
I0506 23:30:44.913903      26 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:30:47.916809      26 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
May  6 23:30:47.919: INFO: Creating new host exec pod
May  6 23:30:47.932: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:49.935: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:30:49.935: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:30:53.952: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-host-exec-pod
May  6 23:30:53.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:54.322: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n" [the wget/echo pair repeats identically for all 150 iterations; trace elided]
May  6 23:30:54.322: INFO: stdout: 150 responses, each naming one of the service's backend pods: up-down-2-mnchx, up-down-2-kdf4x, up-down-2-j4bdf [full list elided; all three backends answered]
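The "verifying service up-down-2 is still up" step passes when every expected backend pod name appears in the captured stdout. Given output like the list above, the distinct-backend count can be checked with standard text tools; the pod names below are sample values taken from this run:

```shell
# Hypothetical excerpt of the captured stdout: one backend pod name per reply.
responses="up-down-2-kdf4x
up-down-2-j4bdf
up-down-2-mnchx
up-down-2-kdf4x"
# The check passes when all expected backends are observed at least once.
printf '%s\n' "$responses" | sort -u | wc -l   # 3 distinct backends
```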
May  6 23:30:54.322: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-exec-pod-fvfmn
May  6 23:30:54.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-exec-pod-fvfmn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.198:80 2>&1 || true; echo; done'
May  6 23:30:54.717: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.198:80\n+ echo\n" (the "+ wget ... + echo" trace pair repeats for all 150 iterations; repeats elided)
May  6 23:30:54.718: INFO: stdout: 150 responses, each naming one of the three up-down-2 backends (up-down-2-mnchx, up-down-2-j4bdf, up-down-2-kdf4x); full per-request list elided
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1731
STEP: Deleting pod verify-service-up-exec-pod-fvfmn in namespace services-1731
STEP: verifying service up-down-3 is up
May  6 23:30:54.732: INFO: Creating new host exec pod
May  6 23:30:54.744: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:56.748: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:30:58.748: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:00.747: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:02.750: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:04.749: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:06.748: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:08.747: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
May  6 23:31:10.749: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
May  6 23:31:10.749: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
May  6 23:31:14.770: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.157:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-host-exec-pod
May  6 23:31:14.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.157:80 2>&1 || true; echo; done'
May  6 23:31:15.141: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.157:80\n+ echo\n" (the "+ wget ... + echo" trace pair repeats for all 150 iterations; repeats elided)
May  6 23:31:15.142: INFO: stdout: 150 responses, each naming one of the three up-down-3 backends (up-down-3-99zzk, up-down-3-r7lrc, up-down-3-9bsl7); full per-request list elided
May  6 23:31:15.142: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.157:80 2>&1 || true; echo; done" in pod services-1731/verify-service-up-exec-pod-bcgr4
May  6 23:31:15.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1731 exec verify-service-up-exec-pod-bcgr4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.157:80 2>&1 || true; echo; done'
May  6 23:31:15.529: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.157:80\n+ echo\n" (the "+ wget ... + echo" trace pair repeats for all 150 iterations; repeats elided)
May  6 23:31:15.530: INFO: stdout: 150 responses, each naming one of the three up-down-3 backends (up-down-3-r7lrc, up-down-3-99zzk, up-down-3-9bsl7); full per-request list elided
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1731
STEP: Deleting pod verify-service-up-exec-pod-bcgr4 in namespace services-1731
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May  6 23:31:15.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1731" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:110.341 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":2,"skipped":378,"failed":0}
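Each completed spec emits a JSON progress line like the one above. A small sketch of tallying pass/fail counts from such lines (the sample lines here are illustrative, and plain `grep` is used instead of a JSON parser as an assumption about what is available):

```shell
#!/bin/sh
# Sketch: tally Ginkgo per-spec JSON progress lines by their "msg" verdict.
# Sample input is illustrative; real input would be the full suite log.
log='{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":2,"skipped":378,"failed":0}
{"msg":"FAILED [sig-network] some other spec","total":-1,"completed":3,"skipped":400,"failed":1}'

passed=$(printf '%s\n' "$log" | grep -c '"msg":"PASSED')
failed=$(printf '%s\n' "$log" | grep -c '"msg":"FAILED')
echo "passed=$passed failed=$failed"
```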
May  6 23:31:15.561: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May  6 23:29:45.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-185
May  6 23:29:45.613: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-185
I0506 23:29:45.625311      31 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-185, replica count: 2
I0506 23:29:48.676412      31 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:51.676689      31 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0506 23:29:54.677368      31 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  6 23:29:54.677: INFO: Creating new exec pod
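The steps above create the service as ClusterIP and then flip it to NodePort while keeping port 80. A hedged sketch of what the updated Service spec looks like after that change (field values and the selector label are illustrative; the node port 30827 seen later in the log is allocated by the apiserver unless explicitly pinned):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-update-service
  namespace: services-185
spec:
  type: NodePort           # changed from ClusterIP by the test
  selector:
    name: nodeport-update-service   # assumed label; matches the RC's pods
  ports:
  - protocol: TCP
    port: 80               # service port exercised by the test
    targetPort: 80
    # nodePort: left unset, so it is allocated from the cluster's
    # node-port range (default 30000-32767); the log shows 30827.
```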
May  6 23:30:03.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
May  6 23:30:03.994: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
May  6 23:30:03.994: INFO: stdout: "nodeport-update-service-kn6q6"
May  6 23:30:03.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.63.2 80'
May  6 23:30:04.237: INFO: stderr: "+ nc -v -t -w 2 10.233.63.2 80\n+ echo hostName\nConnection to 10.233.63.2 80 port [tcp/http] succeeded!\n"
May  6 23:30:04.237: INFO: stdout: "nodeport-update-service-qr6pm"
May  6 23:30:04.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:04.496: INFO: rc: 1
May  6 23:30:04.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:05.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:05.727: INFO: rc: 1
May  6 23:30:05.727: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
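When the node-port probe fails, the framework retries roughly once per second until it succeeds or the poll deadline expires, producing the "Retrying..." entries above. A minimal shell sketch of that pattern (the retry budget and the demo probe are illustrative; the real test uses the framework's Go polling helpers around `kubectl exec ... nc`):

```shell
#!/bin/sh
# Sketch: retry a probe command until it succeeds or attempts run out,
# mirroring the once-per-second "Retrying..." loop in the log.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "succeeded on attempt $i"
      return 0
    fi
    echo "Retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}

# Demo probe: fails twice, then succeeds. It stands in for
# `nc -v -t -w 2 <node-ip> <node-port>` in the real test.
count_file=$(mktemp)
echo 0 > "$count_file"
probe() {
  n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

out=$(retry 5 probe)
echo "$out"
rm -f "$count_file"
```

Because "Retrying..." goes to stderr, stdout carries only the final verdict, which keeps the success path easy to assert on.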
[~40 further probe attempts, 23:30:06.498 through 23:30:46.769, elided: each repeats the same kubectl exec / `nc -v -t -w 2 10.10.190.207 30827` command roughly once per second, fails with "Connection refused" (exit status 1), and logs "Retrying..."; the attempts differ only in their timestamps]
May  6 23:30:47.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:47.755: INFO: rc: 1
May  6 23:30:47.755: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:48.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:48.759: INFO: rc: 1
May  6 23:30:48.759: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:49.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:49.758: INFO: rc: 1
May  6 23:30:49.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:50.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:50.752: INFO: rc: 1
May  6 23:30:50.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:51.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:51.734: INFO: rc: 1
May  6 23:30:51.734: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:52.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:52.741: INFO: rc: 1
May  6 23:30:52.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:53.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:53.741: INFO: rc: 1
May  6 23:30:53.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:54.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:54.831: INFO: rc: 1
May  6 23:30:54.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:55.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:55.741: INFO: rc: 1
May  6 23:30:55.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:56.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:57.040: INFO: rc: 1
May  6 23:30:57.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:57.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:57.756: INFO: rc: 1
May  6 23:30:57.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:30:58.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:30:59.852: INFO: rc: 1
May  6 23:30:59.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:00.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:00.748: INFO: rc: 1
May  6 23:31:00.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:01.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:01.778: INFO: rc: 1
May  6 23:31:01.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:02.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:02.885: INFO: rc: 1
May  6 23:31:02.885: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:03.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:03.745: INFO: rc: 1
May  6 23:31:03.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:04.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:04.757: INFO: rc: 1
May  6 23:31:04.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:05.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:05.772: INFO: rc: 1
May  6 23:31:05.772: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:06.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:06.782: INFO: rc: 1
May  6 23:31:06.782: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:07.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:07.740: INFO: rc: 1
May  6 23:31:07.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:08.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:08.758: INFO: rc: 1
May  6 23:31:08.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:09.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:09.745: INFO: rc: 1
May  6 23:31:09.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ nc -v -t -w+  2 10.10.190.207 30827
echo hostName
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:10.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:10.751: INFO: rc: 1
May  6 23:31:10.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:11.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:11.745: INFO: rc: 1
May  6 23:31:11.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:12.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:12.770: INFO: rc: 1
May  6 23:31:12.771: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:13.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:13.743: INFO: rc: 1
May  6 23:31:13.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:14.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:14.774: INFO: rc: 1
May  6 23:31:14.775: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:15.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:15.763: INFO: rc: 1
May  6 23:31:15.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:16.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:16.770: INFO: rc: 1
May  6 23:31:16.770: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:17.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:17.756: INFO: rc: 1
May  6 23:31:17.756: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:18.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:18.747: INFO: rc: 1
May  6 23:31:18.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:19.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:19.748: INFO: rc: 1
May  6 23:31:19.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:20.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:20.763: INFO: rc: 1
May  6 23:31:20.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:21.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:21.757: INFO: rc: 1
May  6 23:31:21.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:22.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:22.764: INFO: rc: 1
May  6 23:31:22.764: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:23.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:24.063: INFO: rc: 1
May  6 23:31:24.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:24.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:25.071: INFO: rc: 1
May  6 23:31:25.071: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:25.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:25.761: INFO: rc: 1
May  6 23:31:25.761: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:26.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:27.045: INFO: rc: 1
May  6 23:31:27.045: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ + ncecho -v hostName -t
 -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:27.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:27.747: INFO: rc: 1
May  6 23:31:27.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:31:28.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:31:28.762: INFO: rc: 1
May  6 23:31:28.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
(The probe above is retried roughly once per second. Every attempt from 23:31:29 through 23:32:04 produces the identical result: rc: 1, empty stdout, and `nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused` on stderr, followed by `Retrying...`. The intervening identical retry blocks are elided here; only the final attempt is shown below.)
May  6 23:32:04.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827'
May  6 23:32:05.025: INFO: rc: 1
May  6 23:32:05.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-185 exec execpodlzmnv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30827:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30827
nc: connect to 10.10.190.207 port 30827 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May  6 23:32:05.025: FAIL: Unexpected error:
    <*errors.errorString | 0xc0049be230>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30827 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30827 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003406d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003406d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003406d80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May  6 23:32:05.027: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-185".
STEP: Found 17 events.
May  6 23:32:05.055: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodlzmnv: { } Scheduled: Successfully assigned services-185/execpodlzmnv to node1
May  6 23:32:05.055: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-kn6q6: { } Scheduled: Successfully assigned services-185/nodeport-update-service-kn6q6 to node2
May  6 23:32:05.055: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-qr6pm: { } Scheduled: Successfully assigned services-185/nodeport-update-service-qr6pm to node2
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:45 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-qr6pm
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:45 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-kn6q6
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:49 +0000 UTC - event for nodeport-update-service-kn6q6: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:49 +0000 UTC - event for nodeport-update-service-qr6pm: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-kn6q6: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 363.747277ms
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-kn6q6: {kubelet node2} Started: Started container nodeport-update-service
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-kn6q6: {kubelet node2} Created: Created container nodeport-update-service
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-qr6pm: {kubelet node2} Started: Started container nodeport-update-service
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-qr6pm: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 603.467263ms
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:50 +0000 UTC - event for nodeport-update-service-qr6pm: {kubelet node2} Created: Created container nodeport-update-service
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:57 +0000 UTC - event for execpodlzmnv: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:58 +0000 UTC - event for execpodlzmnv: {kubelet node1} Started: Started container agnhost-container
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:58 +0000 UTC - event for execpodlzmnv: {kubelet node1} Created: Created container agnhost-container
May  6 23:32:05.055: INFO: At 2022-05-06 23:29:58 +0000 UTC - event for execpodlzmnv: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 291.249517ms
May  6 23:32:05.058: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
May  6 23:32:05.058: INFO: execpodlzmnv                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:30:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:30:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:54 +0000 UTC  }]
May  6 23:32:05.058: INFO: nodeport-update-service-kn6q6  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:45 +0000 UTC  }]
May  6 23:32:05.058: INFO: nodeport-update-service-qr6pm  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 23:29:45 +0000 UTC  }]
May  6 23:32:05.058: INFO: 
May  6 23:32:05.062: INFO: 
Logging node info for node master1
May  6 23:32:05.065: INFO: Node Info: &Node{ObjectMeta:{master1    3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 76715 0 2022-05-06 20:07:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:56 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:56 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:56 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:31:56 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:32:05.066: INFO: 
Logging kubelet events for node master1
May  6 23:32:05.068: INFO: 
Logging pods the kubelet thinks is on node master1
May  6 23:32:05.112: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:32:05.112: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-controller-manager ready: true, restart count 2
May  6 23:32:05.112: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:32:05.112: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.112: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:32:05.112: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container docker-registry ready: true, restart count 0
May  6 23:32:05.112: INFO: 	Container nginx ready: true, restart count 0
May  6 23:32:05.112: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-scheduler ready: true, restart count 0
May  6 23:32:05.112: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:32:05.112: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Init container install-cni ready: true, restart count 0
May  6 23:32:05.112: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:32:05.112: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.112: INFO: 	Container coredns ready: true, restart count 1
May  6 23:32:05.209: INFO: 
Latency metrics for node master1
May  6 23:32:05.209: INFO: 
Logging node info for node master2
May  6 23:32:05.212: INFO: Node Info: &Node{ObjectMeta:{master2    0aed38bc-6408-4920-b364-7d6b9bff7102 76729 0 2022-05-06 20:08:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:02 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:02 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:02 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:32:02 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:32:05.213: INFO: 
Logging kubelet events for node master2
May  6 23:32:05.215: INFO: 
Logging pods the kubelet thinks is on node master2
May  6 23:32:05.231: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.231: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:32:05.231: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-controller-manager ready: true, restart count 1
May  6 23:32:05.231: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:32:05.231: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container autoscaler ready: true, restart count 1
May  6 23:32:05.231: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:32:05.231: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-scheduler ready: true, restart count 2
May  6 23:32:05.231: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:32:05.231: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:32:05.231: INFO: 	Init container install-cni ready: true, restart count 0
May  6 23:32:05.231: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:32:05.318: INFO: 
Latency metrics for node master2
May  6 23:32:05.318: INFO: 
Logging node info for node master3
May  6 23:32:05.320: INFO: Node Info: &Node{ObjectMeta:{master3    1cc41c26-3708-4912-8ff5-aa83b70d989e 76724 0 2022-05-06 20:08:11 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:32:05.321: INFO: 
Logging kubelet events for node master3
May  6 23:32:05.323: INFO: 
Logging pods the kubelet thinks is on node master3
May  6 23:32:05.344: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-apiserver ready: true, restart count 0
May  6 23:32:05.345: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:32:05.345: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container coredns ready: true, restart count 1
May  6 23:32:05.345: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container nfd-controller ready: true, restart count 0
May  6 23:32:05.345: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-controller-manager ready: true, restart count 3
May  6 23:32:05.345: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-scheduler ready: true, restart count 2
May  6 23:32:05.345: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:32:05.345: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:32:05.345: INFO: 	Init container install-cni ready: true, restart count 2
May  6 23:32:05.345: INFO: 	Container kube-flannel ready: true, restart count 1
May  6 23:32:05.345: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.345: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.345: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:32:05.432: INFO: 
Latency metrics for node master3
May  6 23:32:05.432: INFO: 
Logging node info for node node1
May  6 23:32:05.436: INFO: Node Info: &Node{ObjectMeta:{node1    851b0a69-efd4-49b7-98ef-f0cfe2d311c6 76726 0 2022-05-06 20:09:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 
20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:27:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:32:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:32:00 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:859ab6768a6f26a79bc42b231664111317d095a4f04e4b6fe79ce37b3d199097 nginx:latest],SizeBytes:141522124,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:32:05.437: INFO: 
Logging kubelet events for node node1
May  6 23:32:05.438: INFO: 
Logging pods the kubelet thinks are on node node1
May  6 23:32:05.459: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container prometheus-operator ready: true, restart count 0
May  6 23:32:05.459: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container config-reloader ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container grafana ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container prometheus ready: true, restart count 1
May  6 23:32:05.459: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container kube-sriovdp ready: true, restart count 0
May  6 23:32:05.459: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container discover ready: false, restart count 0
May  6 23:32:05.459: INFO: 	Container init ready: false, restart count 0
May  6 23:32:05.459: INFO: 	Container install ready: false, restart count 0
May  6 23:32:05.459: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container nodereport ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container reconcile ready: true, restart count 0
May  6 23:32:05.459: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container nginx-proxy ready: true, restart count 2
May  6 23:32:05.459: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:32:05.459: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Init container install-cni ready: true, restart count 2
May  6 23:32:05.459: INFO: 	Container kube-flannel ready: true, restart count 3
May  6 23:32:05.459: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container nfd-worker ready: true, restart count 0
May  6 23:32:05.459: INFO: execpodlzmnv started at 2022-05-06 23:29:55 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container agnhost-container ready: true, restart count 0
May  6 23:32:05.459: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:32:05.459: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:32:05.459: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May  6 23:32:05.459: INFO: 	Container collectd ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container collectd-exporter ready: true, restart count 0
May  6 23:32:05.459: INFO: 	Container rbac-proxy ready: true, restart count 0
May  6 23:32:05.631: INFO: 
Latency metrics for node node1
May  6 23:32:05.631: INFO: 
Logging node info for node node2
May  6 23:32:05.634: INFO: Node Info: &Node{ObjectMeta:{node2    2dab2a66-f2eb-49db-9725-3dda82cede11 76725 0 2022-05-06 20:09:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 
20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 22:28:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-05-06 22:30:11 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 23:31:59 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May  6 23:32:05.636: INFO: 
Logging kubelet events for node node2
May  6 23:32:05.643: INFO: 
Logging pods the kubelet thinks are on node node2
May  6 23:32:05.661: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container nfd-worker ready: true, restart count 0
May  6 23:32:05.661: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kube-multus ready: true, restart count 1
May  6 23:32:05.661: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container nodereport ready: true, restart count 0
May  6 23:32:05.661: INFO: 	Container reconcile ready: true, restart count 0
May  6 23:32:05.661: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container tas-extender ready: true, restart count 0
May  6 23:32:05.661: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kube-proxy ready: true, restart count 2
May  6 23:32:05.661: INFO: nodeport-update-service-kn6q6 started at 2022-05-06 23:29:45 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container nodeport-update-service ready: true, restart count 0
May  6 23:32:05.661: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container collectd ready: true, restart count 0
May  6 23:32:05.661: INFO: 	Container collectd-exporter ready: true, restart count 0
May  6 23:32:05.661: INFO: 	Container rbac-proxy ready: true, restart count 0
May  6 23:32:05.661: INFO: nodeport-update-service-qr6pm started at 2022-05-06 23:29:45 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container nodeport-update-service ready: true, restart count 0
May  6 23:32:05.661: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container discover ready: false, restart count 0
May  6 23:32:05.661: INFO: 	Container init ready: false, restart count 0
May  6 23:32:05.661: INFO: 	Container install ready: false, restart count 0
May  6 23:32:05.661: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
May  6 23:32:05.661: INFO: 	Container node-exporter ready: true, restart count 0
May  6 23:32:05.661: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
May  6 23:32:05.661: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kube-sriovdp ready: true, restart count 0
May  6 23:32:05.661: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container cmk-webhook ready: true, restart count 0
May  6 23:32:05.661: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container nginx-proxy ready: true, restart count 2
May  6 23:32:05.661: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Init container install-cni ready: true, restart count 1
May  6 23:32:05.661: INFO: 	Container kube-flannel ready: true, restart count 2
May  6 23:32:05.661: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May  6 23:32:05.661: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
May  6 23:32:05.837: INFO: 
Latency metrics for node node2
May  6 23:32:05.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-185" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [140.261 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  May  6 23:32:05.026: Unexpected error:
      <*errors.errorString | 0xc0049be230>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30827 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30827 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":975,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
May  6 23:32:05.855: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":4,"skipped":486,"failed":0}
May  6 23:29:46.710: INFO: Running AfterSuite actions on all nodes
May  6 23:32:05.944: INFO: Running AfterSuite actions on node 1
May  6 23:32:05.945: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 28 of 5773 Specs in 250.654 seconds
FAIL! -- 26 Passed | 2 Failed | 0 Pending | 5745 Skipped


Ginkgo ran 1 suite in 4m12.402549825s
Test Suite Failed