I0325 16:45:30.101404 7 e2e.go:129] Starting e2e run "ac67e6e1-fa3a-4164-8ca1-641114cf82b4" on Ginkgo node 1
{"msg":"Test Suite starting","total":51,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616690728 - Will randomize all specs
Will run 51 of 5737 specs

Mar 25 16:45:30.208: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:45:30.211: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 16:45:30.330: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 16:45:30.439: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 16:45:30.439: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 16:45:30.439: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 16:45:30.457: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 16:45:30.457: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 16:45:30.457: INFO: e2e test version: v1.21.0-beta.1
Mar 25 16:45:30.459: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 16:45:30.459: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:45:30.463: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:45:30.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
Mar 25 16:45:30.673: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:45:31.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4468" for this suite.
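For reference, the static-path check above can be approximated by hand against the same apiserver. The loop below is an illustrative sketch (it assumes kubectl is pointed at this cluster's kubeconfig), not the spec's own code; the spec only asserts that these URLs keep answering at the same paths.

    for path in /healthz /api /apis /metrics /openapi/v2 /version /logs; do
      kubectl get --raw "$path" > /dev/null && echo "OK: $path"
    done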
•{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":51,"completed":1,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should prevent NodePort collisions /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:45:31.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should prevent NodePort collisions /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440 STEP: creating service nodeport-collision-1 with type NodePort in namespace services-3033 STEP: creating service nodeport-collision-2 with conflicting NodePort STEP: deleting service nodeport-collision-1 to release NodePort STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort STEP: deleting service nodeport-collision-2 in namespace services-3033 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:45:32.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3033" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":51,"completed":2,"skipped":454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:45:32.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 STEP: Performing setup for networking test in namespace nettest-7831 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:45:32.281: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:45:32.392: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:45:34.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:45:36.405: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:38.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:40.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:42.400: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:44.398: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:46.441: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:45:48.452: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:45:48.584: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:45:50.656: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:45:52.588: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:45:54.588: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:45:56.589: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:46:02.951: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:46:02.951: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:46:03.023: INFO: Service node-port-service in namespace nettest-7831 found. Mar 25 16:46:03.094: INFO: Service session-affinity-service in namespace nettest-7831 found. 
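The dial checks that follow poll each backend's /hostName handler with a plain curl from the host-network test pod until every expected pod name has been seen. One iteration of that poll, lifted from the ExecWithOptions entries below and wrapped in kubectl exec (roughly what the framework's exec helper does; pod, namespace and service IP are the ones from this run), looks like:

    kubectl -n nettest-7831 exec host-test-container-pod -c agnhost-container -- /bin/sh -c \
      "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.206.89:80/hostName | grep -v '^\s*$'"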
STEP: Waiting for NodePort service to expose endpoint Mar 25 16:46:04.130: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:46:05.134: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) 172.18.0.17 (node) --> 10.96.206.89:80 (config.clusterIP) Mar 25 16:46:05.138: INFO: Going to poll 10.96.206.89 on port 80 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:46:05.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.206.89:80/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:05.142: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:05.269: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 16:46:07.327: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.206.89:80/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:07.327: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:07.504: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 16:46:09.873: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.206.89:80/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:09.873: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:10.191: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:30414 (nodeIP) Mar 25 16:46:10.191: INFO: Going to poll 172.18.0.17 on port 30414 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:46:11.051: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30414/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:11.051: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:11.276: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Mar 25 16:46:13.394: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30414/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:13.394: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:13.950: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Mar 25 16:46:16.023: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30414/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:16.023: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:16.165: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Mar 25 16:46:18.171: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30414/hostName | grep -v '^\s*$'] Namespace:nettest-7831 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:18.171: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:46:18.299: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:46:18.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7831" for this suite. • [SLOW TEST:46.228 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":51,"completed":3,"skipped":603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should handle updates to ExternalTrafficPolicy field /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:46:18.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 25 16:46:18.406: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:46:18.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-2375" for this suite. 
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.107 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should handle updates to ExternalTrafficPolicy field [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Firewall rule should have correct firewall rules for e2e cluster /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204 [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:46:18.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename firewall-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61 Mar 25 16:46:18.519: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:46:18.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "firewall-test-1544" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.117 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:46:18.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-5819
STEP: deleting original service nodeport-reuse
Mar 25 16:46:18.956: INFO: Creating new host exec pod
Mar 25 16:46:19.046: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:46:21.050: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:46:23.051: INFO: The status of Pod hostexec is Running (Ready = true)
Mar 25 16:46:23.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-5819 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :32023' | tail -n +2 | grep LISTEN'
Mar 25 16:46:30.169: INFO: stderr: "+ tail -n +2\n+ ss -ant46 'sport = :32023'\n+ grep LISTEN\n"
Mar 25 16:46:30.169: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 32023
STEP: deleting service nodeport-reuse in namespace services-5819
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:46:30.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5819" for this suite.
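The Running '/usr/local/bin/kubectl ... ss -ant46 ...' entry above is the core assertion of this spec: after the original service is deleted, a host-network exec pod checks that nothing on the node is still listening on the released NodePort. A standalone equivalent of that check (pod, namespace and port taken from this run, with the shell command quoted rather than inlined) would be roughly:

    kubectl --namespace=services-5819 exec hostexec -- /bin/sh -x -c \
      "! ss -ant46 'sport = :32023' | tail -n +2 | grep LISTEN"

An empty stdout, as logged above, means no LISTEN socket matched and the same NodePort (32023) can be reused by the next service.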
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.820 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should release NodePorts on delete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561 ------------------------------ {"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":51,"completed":4,"skipped":728,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:46:30.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 STEP: Performing setup for networking test in namespace nettest-4654 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:46:30.609: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:46:30.735: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:46:32.809: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:46:34.741: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:36.739: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:38.739: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:40.739: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:42.738: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:44.746: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:46.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:48.738: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:46:50.740: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:46:50.746: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:46:56.905: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:46:56.905: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:46:57.067: INFO: Service node-port-service in namespace nettest-4654 found. Mar 25 16:46:57.262: INFO: Service session-affinity-service in namespace nettest-4654 found. 
STEP: Waiting for NodePort service to expose endpoint Mar 25 16:46:58.265: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:46:59.464: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: pod-Service(hostNetwork): http STEP: dialing(http) test-container-pod --> 10.96.193.161:80 (config.clusterIP) Mar 25 16:46:59.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=10.96.193.161&port=80&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:46:59.811: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:00.246: INFO: Waiting for responses: map[latest-worker:{}] Mar 25 16:47:02.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=10.96.193.161&port=80&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:02.256: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:02.396: INFO: Waiting for responses: map[latest-worker:{}] Mar 25 16:47:04.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=10.96.193.161&port=80&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:04.400: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:04.497: INFO: Waiting for responses: map[latest-worker:{}] Mar 25 16:47:06.501: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=10.96.193.161&port=80&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:06.502: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:06.628: INFO: Waiting for responses: map[] Mar 25 16:47:06.628: INFO: reached 10.96.193.161 after 3/34 tries STEP: dialing(http) test-container-pod --> 172.18.0.17:30777 (nodeIP) Mar 25 16:47:06.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=30777&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:06.631: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:06.787: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 16:47:08.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=30777&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:08.798: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:08.881: INFO: Waiting for responses: map[] Mar 25 16:47:08.881: INFO: reached 172.18.0.17 after 1/34 tries STEP: node-Service(hostNetwork): http STEP: dialing(http) 172.18.0.17 (node) --> 10.96.193.161:80 (config.clusterIP) Mar 25 16:47:08.881: INFO: Going to 
poll 10.96.193.161 on port 80 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:47:08.884: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.193.161:80/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:08.884: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:08.997: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 25 16:47:11.041: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.193.161:80/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:11.041: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:11.174: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:30777 (nodeIP) Mar 25 16:47:11.174: INFO: Going to poll 172.18.0.17 on port 30777 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:47:11.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30777/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:11.211: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:11.315: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:13.321: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30777/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:13.321: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:13.511: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:15.516: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30777/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:15.516: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:15.625: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:17.630: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30777/hostName | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:17.630: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:17.762: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: node-Service(hostNetwork): udp STEP: dialing(udp) 172.18.0.17 (node) --> 10.96.193.161:90 (config.clusterIP) Mar 25 16:47:17.762: INFO: Going to poll 10.96.193.161 on port 
90 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:47:17.776: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.193.161 90 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:17.776: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:18.914: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:20.918: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.193.161 90 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:20.918: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:22.028: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:24.033: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.193.161 90 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:24.033: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:25.136: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:27.245: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.193.161 90 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:27.245: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:28.361: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:30209 (nodeIP) Mar 25 16:47:28.361: INFO: Going to poll 172.18.0.17 on port 30209 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:47:28.366: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30209 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:28.366: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:29.470: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:31.475: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30209 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:31.476: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:32.598: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 16:47:34.603: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30209 | grep -v '^\s*$'] Namespace:nettest-4654 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:34.603: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:35.737: INFO: 
Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: handle large requests: http(hostNetwork) STEP: dialing(http) test-container-pod --> 10.96.193.161:80 (config.clusterIP) Mar 25 16:47:35.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.28:9080/dial?request=echo?msg=42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.193.161&port=80&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:35.740: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:35.858: INFO: Waiting for responses: map[] Mar 25 16:47:35.858: INFO: reached 10.96.193.161 after 0/34 tries STEP: handle large requests: udp(hostNetwork) STEP: dialing(udp) test-container-pod --> 10.96.193.161:90 (config.clusterIP) Mar 25 16:47:35.861: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.28:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.193.161&port=90&tries=1'] Namespace:nettest-4654 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:47:35.861: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:47:35.966: INFO: Waiting for responses: map[] Mar 25 16:47:35.966: INFO: reached 10.96.193.161 after 0/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:47:35.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4654" for this suite. 
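Two probe styles appear in the hostNetwork test above: HTTP checks go through the agnhost webserver's /dial endpoint on the test pod, and UDP checks pipe a literal "hostName" request through nc. Minimal stand-alone versions of both, using the pod, service IP and ports from this run (the kubectl exec wrapper stands in for the framework's ExecWithOptions helper), might look like:

    # HTTP: ask the test pod to dial the ClusterIP and report which endpoints answered
    kubectl -n nettest-4654 exec test-container-pod -c webserver -- /bin/sh -c \
      "curl -g -q -s 'http://10.244.2.28:9080/dial?request=hostname&protocol=http&host=10.96.193.161&port=80&tries=1'"

    # UDP: send "hostName" to the service port and expect the serving pod's hostname back
    kubectl -n nettest-4654 exec host-test-container-pod -c agnhost-container -- /bin/sh -c \
      "echo hostName | nc -w 1 -u 10.96.193.161 90 | grep -v '^\s*$'"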
• [SLOW TEST:65.622 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork","total":51,"completed":5,"skipped":734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:47:35.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 STEP: Performing setup for networking test in namespace nettest-5884 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:47:36.201: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:47:36.334: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:47:38.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:47:40.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:47:42.407: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:47:44.537: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:47:46.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:47:48.338: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:47:50.339: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:47:52.341: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:47:52.347: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:47:54.351: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:47:56.352: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:48:02.394: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:48:02.394: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:48:02.554: INFO: Service node-port-service in namespace nettest-5884 found. Mar 25 16:48:02.680: INFO: Service session-affinity-service in namespace nettest-5884 found. 
STEP: Waiting for NodePort service to expose endpoint Mar 25 16:48:03.696: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:48:04.700: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) test-container-pod --> 10.96.231.143:80 (config.clusterIP) Mar 25 16:48:04.707: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=10.96.231.143&port=80&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:04.707: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:04.845: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:48:06.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=10.96.231.143&port=80&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:06.850: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:06.955: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:48:08.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=10.96.231.143&port=80&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:08.960: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:09.057: INFO: Waiting for responses: map[] Mar 25 16:48:09.057: INFO: reached 10.96.231.143 after 2/34 tries STEP: dialing(http) test-container-pod --> 172.18.0.17:31668 (nodeIP) Mar 25 16:48:09.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31668&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:09.061: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:09.180: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:48:11.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31668&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:11.195: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:11.299: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:48:13.303: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31668&tries=1'] Namespace:nettest-5884 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:13.303: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:13.415: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:48:15.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.175:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31668&tries=1'] Namespace:nettest-5884 PodName:test-container-pod 
ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:48:15.419: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:48:15.516: INFO: Waiting for responses: map[] Mar 25 16:48:15.516: INFO: reached 172.18.0.17 after 3/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:48:15.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5884" for this suite. • [SLOW TEST:39.604 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":51,"completed":6,"skipped":773,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:48:15.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Mar 25 16:48:16.218: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:48:16.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1144" for this suite. 
S [SKIPPING] [0.701 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Provider:GCE] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for pod-Service(hostNetwork): udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:48:16.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service(hostNetwork): udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 Mar 25 16:48:16.495: INFO: skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565 [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:48:16.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9005" for this suite. 
S [SKIPPING] [0.219 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service(hostNetwork): udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:48:16.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for the cluster [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2571.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2571.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2571.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2571.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2571.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2571.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 25 16:48:26.913: INFO: DNS probes using dns-2571/dns-test-6e10c750-c212-4293-851a-a4ea693b531c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:48:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2571" for this suite. 
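Each entry in the wheezy/jessie command strings above is an independent dig (or getent) lookup whose success is recorded as a file that the prober later collects; the doubled $$ is, as far as the container spec is concerned, the escape for a literal $ in a command/args field, so it runs as a single $ inside the probe pod. Executed directly in any cluster pod with dig available, one of the UDP checks reduces to roughly:

    check="$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$check" && echo OK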
• [SLOW TEST:10.697 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":51,"completed":7,"skipped":900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:48:27.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-993
Mar 25 16:48:28.318: INFO: hairpin-test cluster ip: 10.96.168.116
STEP: creating a client/server pod
Mar 25 16:48:29.149: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:48:31.288: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:48:33.366: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:48:35.179: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-993 to expose endpoints map[hairpin:[8080]]
Mar 25 16:48:35.185: INFO: successfully validated that service hairpin-test in namespace services-993 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
E0325 16:48:35.187268 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:48:36.482139 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:48:39.156517 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:48:44.922400 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:48:54.798963 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:49:10.474995 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 16:49:47.894015 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 16:50:35.187: FAIL: Unexpected error:
    <*errors.errorString | 0xc000cff480>: {
        s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012 +0x6a5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002a94900, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-993".
STEP: Found 4 events.
Mar 25 16:50:35.192: INFO: At 2021-03-25 16:48:28 +0000 UTC - event for hairpin: {default-scheduler } Scheduled: Successfully assigned services-993/hairpin to latest-worker
Mar 25 16:50:35.192: INFO: At 2021-03-25 16:48:31 +0000 UTC - event for hairpin: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 16:50:35.192: INFO: At 2021-03-25 16:48:34 +0000 UTC - event for hairpin: {kubelet latest-worker} Created: Created container agnhost-container
Mar 25 16:50:35.192: INFO: At 2021-03-25 16:48:34 +0000 UTC - event for hairpin: {kubelet latest-worker} Started: Started container agnhost-container
Mar 25 16:50:35.195: INFO: POD      NODE           PHASE    GRACE  CONDITIONS
Mar 25 16:50:35.195: INFO: hairpin  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:48:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:48:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:48:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:48:28 +0000 UTC  }]
Mar 25 16:50:35.195: INFO:
Mar 25 16:50:35.200: INFO: Logging node info for node latest-control-plane
Mar 25 16:50:35.203: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1254275 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:49:38 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:50:35.203: INFO: Logging kubelet events for node latest-control-plane Mar 25 16:50:35.207: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 16:50:35.231: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 16:50:35.231: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 16:50:35.231: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:50:35.231: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container coredns ready: true, restart count 0 Mar 25 16:50:35.231: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 
11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container coredns ready: true, restart count 0 Mar 25 16:50:35.231: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container etcd ready: true, restart count 0 Mar 25 16:50:35.231: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 16:50:35.231: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 16:50:35.231: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.231: INFO: Container kube-apiserver ready: true, restart count 0 W0325 16:50:35.237098 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 16:50:35.316: INFO: Latency metrics for node latest-control-plane Mar 25 16:50:35.316: INFO: Logging node info for node latest-worker Mar 25 16:50:35.320: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1253600 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:45:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 16:47:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:50:35.321: INFO: Logging kubelet events for node latest-worker Mar 25 16:50:35.325: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 16:50:35.346: INFO: hairpin started at 2021-03-25 16:48:28 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 16:50:35.346: INFO: ss-1 started at 2021-03-25 16:49:49 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container webserver ready: true, restart count 0 Mar 25 16:50:35.346: INFO: ss-2 started at 2021-03-25 16:50:09 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container webserver ready: false, restart count 0 Mar 25 16:50:35.346: INFO: execpod-affinityvxpj5 started at 2021-03-25 16:49:44 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 16:50:35.346: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 16:50:35.346: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:50:35.346: INFO: affinity-clusterip-transition-7rgdt started at 2021-03-25 16:49:35 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.346: INFO: Container affinity-clusterip-transition ready: true, restart count 0 W0325 16:50:35.352695 7 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 16:50:35.485: INFO: Latency metrics for node latest-worker Mar 25 16:50:35.485: INFO: Logging node info for node latest-worker2 Mar 25 16:50:35.489: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1252905 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:38:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:46:57 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 16:50:35.490: INFO: Logging kubelet events for node latest-worker2 Mar 25 16:50:35.493: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 16:50:35.498: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container volume-tester ready: false, restart count 0 Mar 25 16:50:35.498: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 16:50:35.498: INFO: ss-0 started at 2021-03-25 16:49:22 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container webserver ready: true, restart count 0 Mar 25 16:50:35.498: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 16:50:35.498: INFO: pod-ready started at 2021-03-25 16:50:27 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container pod-readiness-gate ready: true, restart count 0 Mar 25 16:50:35.498: INFO: affinity-clusterip-transition-mjhj9 started at 2021-03-25 16:49:35 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Mar 25 16:50:35.498: INFO: affinity-clusterip-transition-pn6ns started at 2021-03-25 16:49:35 +0000 UTC (0+1 container statuses recorded) Mar 25 16:50:35.498: INFO: Container affinity-clusterip-transition ready: true, restart count 0 W0325 16:50:35.503952 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 16:50:35.650: INFO: Latency metrics for node latest-worker2 Mar 25 16:50:35.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-993" for this suite. 
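Before the failure summary that follows, it helps to restate what the hairpin test was actually waiting on: a ClusterIP Service named hairpin-test forwarding port 8080 to the single agnhost pod, then a two-minute wait for the service's Endpoints to gain an address, which never happened while the EndpointSlice watches kept failing. The client-go sketch below reproduces only that setup and wait; it is a hedged illustration under stated assumptions (a pre-existing namespace, a backend pod with the matching label already Running and Ready, kubeconfig at /root/.kube/config as in the run), not the e2e framework's service jig.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Assumptions: the namespace exists and an agnhost-style pod carrying this
	// label is already serving on 8080, like the "hairpin" pod in the log.
	ns := "services-hairpin-demo"
	selector := map[string]string{"app": "hairpin"}

	// ClusterIP Service forwarding port 8080 to the pod's own port 8080.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "hairpin-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeClusterIP,
			Selector: selector,
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       8080,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Wait for the service to expose an endpoint: the step that timed out
	// after 2m0s in the run above.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), "hairpin-test", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 && len(subset.Ports) > 0 {
				fmt.Printf("endpoint ready: %s:%d\n", subset.Addresses[0].IP, subset.Ports[0].Port)
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic("no subset of available IP address found for the endpoint hairpin-test")
	}
}

On a healthy cluster the poll returns within seconds of the backend pod becoming Ready; in this run it exhausted the full 2m0s, which is exactly the "no subset of available IP address found" error reported next.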
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [128.456 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should allow pods to hairpin back to themselves through services [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 Mar 25 16:50:35.187: Unexpected error: <*errors.errorString | 0xc000cff480>: { s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s", } no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012 ------------------------------ {"msg":"FAILED [sig-network] Services should allow pods to hairpin back to themselves through services","total":51,"completed":7,"skipped":956,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:50:35.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 STEP: Performing setup for networking test in namespace nettest-9710 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:50:35.863: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:50:35.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:50:37.921: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:50:39.922: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:50:41.965: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:50:43.922: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:50:45.921: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:50:47.921: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:50:49.923: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:50:49.929: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:50:51.935: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:50:53.934: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 
16:50:57.966: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:50:57.967: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:50:58.068: INFO: Service node-port-service in namespace nettest-9710 found. Mar 25 16:50:58.133: INFO: Service session-affinity-service in namespace nettest-9710 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 16:50:59.177: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:51:00.182: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) netserver-0 (endpoint) --> 10.96.24.214:80 (config.clusterIP) Mar 25 16:51:00.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.48:8080/dial?request=hostname&protocol=http&host=10.96.24.214&port=80&tries=1'] Namespace:nettest-9710 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:51:00.191: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:51:00.320: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:51:02.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.48:8080/dial?request=hostname&protocol=http&host=10.96.24.214&port=80&tries=1'] Namespace:nettest-9710 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:51:02.326: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:51:02.451: INFO: Waiting for responses: map[] Mar 25 16:51:02.451: INFO: reached 10.96.24.214 after 1/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:32006 (nodeIP) Mar 25 16:51:02.455: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.48:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32006&tries=1'] Namespace:nettest-9710 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:51:02.455: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:51:02.562: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:51:04.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.48:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32006&tries=1'] Namespace:nettest-9710 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:51:04.567: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:51:04.672: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:51:06.678: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.48:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32006&tries=1'] Namespace:nettest-9710 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:51:06.678: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:51:06.772: INFO: Waiting for responses: map[] Mar 25 16:51:06.772: INFO: reached 172.18.0.17 after 2/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:51:06.772: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9710" for this suite. • [SLOW TEST:31.120 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","total":51,"completed":8,"skipped":1097,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130 [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:51:06.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130 STEP: creating a UDP service svc-udp with type=NodePort in conntrack-9464 STEP: creating a client pod for probing the service svc-udp Mar 25 16:51:07.039: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:09.467: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:11.044: INFO: The status of Pod pod-client is Running (Ready = true) Mar 25 16:51:11.054: INFO: Pod client logs: Thu Mar 25 16:51:10 UTC 2021 Thu Mar 25 16:51:10 UTC 2021 Try: 1 Thu Mar 25 16:51:10 UTC 2021 Try: 2 Thu Mar 25 16:51:10 UTC 2021 Try: 3 Thu Mar 25 16:51:10 UTC 2021 Try: 4 Thu Mar 25 16:51:10 UTC 2021 Try: 5 Thu Mar 25 16:51:10 UTC 2021 Try: 6 Thu Mar 25 16:51:10 UTC 2021 Try: 7 STEP: creating a backend pod pod-server-1 for the service svc-udp Mar 25 16:51:11.065: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:13.103: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:15.071: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:17.071: INFO: The status of Pod pod-server-1 is Running (Ready = true) STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-9464 to expose endpoints map[pod-server-1:[80]] Mar 25 16:51:17.081: INFO: successfully validated that service svc-udp in namespace conntrack-9464 exposes endpoints map[pod-server-1:[80]] STEP: checking client pod connected to the backend 1 on Node IP 172.18.0.15 STEP: creating a 
second backend pod pod-server-2 for the service svc-udp Mar 25 16:51:27.113: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:29.117: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:31.118: INFO: The status of Pod pod-server-2 is Running (Ready = true) Mar 25 16:51:31.122: INFO: Cleaning up pod-server-1 pod Mar 25 16:51:31.161: INFO: Waiting for pod pod-server-1 to disappear Mar 25 16:51:31.210: INFO: Pod pod-server-1 no longer exists STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-9464 to expose endpoints map[pod-server-2:[80]] Mar 25 16:51:31.224: INFO: successfully validated that service svc-udp in namespace conntrack-9464 exposes endpoints map[pod-server-2:[80]] STEP: checking client pod connected to the backend 2 on Node IP 172.18.0.15 [AfterEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:51:41.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "conntrack-9464" for this suite. • [SLOW TEST:34.461 seconds] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to preserve UDP traffic when server pod cycles for a NodePort service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130 ------------------------------ {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":51,"completed":9,"skipped":1189,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:51:41.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-559 STEP: creating a client pod for probing the service svc-udp Mar 25 16:51:41.458: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:43.463: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:45.498: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:47.558: INFO: The status of 
Pod pod-client is Running (Ready = true) Mar 25 16:51:47.809: INFO: Pod client logs: Thu Mar 25 16:51:44 UTC 2021 Thu Mar 25 16:51:44 UTC 2021 Try: 1 Thu Mar 25 16:51:44 UTC 2021 Try: 2 Thu Mar 25 16:51:44 UTC 2021 Try: 3 Thu Mar 25 16:51:44 UTC 2021 Try: 4 Thu Mar 25 16:51:44 UTC 2021 Try: 5 Thu Mar 25 16:51:44 UTC 2021 Try: 6 Thu Mar 25 16:51:44 UTC 2021 Try: 7 STEP: creating a backend pod pod-server-1 for the service svc-udp Mar 25 16:51:48.247: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:50.252: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:52.283: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:51:54.253: INFO: The status of Pod pod-server-1 is Running (Ready = true) STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-559 to expose endpoints map[pod-server-1:[80]] Mar 25 16:51:54.265: INFO: successfully validated that service svc-udp in namespace conntrack-559 exposes endpoints map[pod-server-1:[80]] STEP: checking client pod connected to the backend 1 on Node IP 172.18.0.15 STEP: creating a second backend pod pod-server-2 for the service svc-udp Mar 25 16:52:04.291: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:06.775: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:08.297: INFO: The status of Pod pod-server-2 is Running (Ready = true) Mar 25 16:52:08.300: INFO: Cleaning up pod-server-1 pod Mar 25 16:52:08.387: INFO: Waiting for pod pod-server-1 to disappear Mar 25 16:52:08.399: INFO: Pod pod-server-1 no longer exists STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-559 to expose endpoints map[pod-server-2:[80]] Mar 25 16:52:08.420: INFO: successfully validated that service svc-udp in namespace conntrack-559 exposes endpoints map[pod-server-2:[80]] STEP: checking client pod connected to the backend 2 on Node IP 172.18.0.15 [AfterEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:52:18.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "conntrack-559" for this suite. 
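Both conntrack cases above come down to a client pod that keeps pushing one UDP datagram per second at svc-udp while the backend behind the service is swapped from pod-server-1 to pod-server-2. The loop below is an illustrative stand-in for that client, written against a placeholder ClusterIP:port rather than the real service address; the run above verifies which backend the probes actually reached in its "checking client pod connected to the backend" steps, a check this sketch leaves out.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder target: in-cluster this would be the svc-udp ClusterIP:80,
	// or <nodeIP>:<nodePort> for the NodePort flavour of the test.
	const target = "10.96.0.123:80"

	conn, err := net.Dial("udp", target)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	for try := 1; ; try++ {
		// One timestamped datagram per second, mirroring the "Try: N" lines in
		// the pod-client logs above. UDP has no handshake, so the same 5-tuple
		// keeps flowing through whatever conntrack entry exists; that entry is
		// what the test exercises when the backend pod is replaced.
		msg := fmt.Sprintf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)
		if _, err := conn.Write([]byte(msg)); err != nil {
			fmt.Println("send failed:", err)
		}
		time.Sleep(time.Second)
	}
}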
• [SLOW TEST:37.228 seconds] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 ------------------------------ {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":51,"completed":10,"skipped":1374,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should complete a service status lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:52:18.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212 STEP: creating a Service STEP: watching for the Service to be added Mar 25 16:52:18.597: INFO: Found Service test-service-8hc5j in namespace services-8923 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] Mar 25 16:52:18.597: INFO: Service test-service-8hc5j created STEP: Getting /status Mar 25 16:52:18.615: INFO: Service test-service-8hc5j has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Mar 25 16:52:18.638: INFO: observed Service test-service-8hc5j in namespace services-8923 with annotations: map[] & LoadBalancer: {[]} Mar 25 16:52:18.638: INFO: Found Service test-service-8hc5j in namespace services-8923 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Mar 25 16:52:18.638: INFO: Service test-service-8hc5j has service status patched STEP: updating the ServiceStatus Mar 25 16:52:18.643: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Mar 25 16:52:18.645: INFO: Observed Service test-service-8hc5j in namespace services-8923 with annotations: map[] & Conditions: {[]} Mar 25 16:52:18.645: INFO: Observed event: &Service{ObjectMeta:{test-service-8hc5j services-8923 20a9f271-c04e-4e31-941a-17b47b65e25e 1255384 0 2021-03-25 16:52:18 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-03-25 16:52:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.96.74.39,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.74.39],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Mar 25 16:52:18.645: INFO: Found Service test-service-8hc5j in namespace services-8923 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Mar 25 16:52:18.645: INFO: Service test-service-8hc5j has service status updated STEP: patching the service STEP: watching for the Service to be patched Mar 25 16:52:18.665: INFO: observed Service test-service-8hc5j in namespace services-8923 with labels: map[test-service-static:true] Mar 25 16:52:18.665: INFO: observed Service test-service-8hc5j in namespace services-8923 with labels: map[test-service-static:true] Mar 25 16:52:18.665: INFO: observed Service test-service-8hc5j in namespace services-8923 with labels: map[test-service-static:true] Mar 25 16:52:18.665: INFO: Found Service test-service-8hc5j in namespace services-8923 with labels: map[test-service:patched test-service-static:true] Mar 25 16:52:18.665: INFO: Service test-service-8hc5j patched STEP: deleting the service STEP: watching for the Service to be deleted Mar 25 16:52:18.712: INFO: Observed event: ADDED Mar 25 16:52:18.712: INFO: Observed event: MODIFIED Mar 25 16:52:18.712: INFO: Observed event: MODIFIED Mar 25 16:52:18.712: INFO: Observed event: MODIFIED Mar 25 16:52:18.712: INFO: Found Service test-service-8hc5j in namespace services-8923 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Mar 25 16:52:18.712: INFO: Service test-service-8hc5j deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:52:18.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8923" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should complete a service status lifecycle","total":51,"completed":11,"skipped":1577,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:52:18.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 STEP: Performing setup for networking test in namespace nettest-9705 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:52:18.959: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:52:19.068: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:21.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:23.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:25.101: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:27.072: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:29.071: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:31.127: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:33.073: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:35.074: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:37.073: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:39.072: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:52:41.073: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:52:41.080: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:52:45.114: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:52:45.114: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:52:45.276: INFO: Service node-port-service in namespace nettest-9705 found. Mar 25 16:52:45.481: INFO: Service session-affinity-service in namespace nettest-9705 found. 
STEP: Waiting for NodePort service to expose endpoint Mar 25 16:52:46.492: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:52:47.496: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) netserver-0 (endpoint) --> 10.96.43.129:90 (config.clusterIP) Mar 25 16:52:47.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=10.96.43.129&port=90&tries=1'] Namespace:nettest-9705 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:52:47.503: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:52:47.631: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:52:49.680: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=10.96.43.129&port=90&tries=1'] Namespace:nettest-9705 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:52:49.680: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:52:49.799: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:52:51.804: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=10.96.43.129&port=90&tries=1'] Namespace:nettest-9705 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:52:51.804: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:52:51.922: INFO: Waiting for responses: map[] Mar 25 16:52:51.922: INFO: reached 10.96.43.129 after 2/34 tries STEP: dialing(udp) netserver-0 (endpoint) --> 172.18.0.17:30003 (nodeIP) Mar 25 16:52:51.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=30003&tries=1'] Namespace:nettest-9705 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:52:51.925: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:52:52.021: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:52:54.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=30003&tries=1'] Namespace:nettest-9705 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:52:54.063: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:52:54.310: INFO: Waiting for responses: map[] Mar 25 16:52:54.310: INFO: reached 172.18.0.17 after 1/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:52:54.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9705" for this suite. 
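Every dial in these granular checks goes through the test-container-pod's webserver, which exposes a /dial endpoint and reports the hostnames that answered as JSON (the same {"responses":[...]} shape that appears later in this log). A small sketch of issuing one of those queries from Go; the pod IP, cluster IP, and port below are copied from this run and change on every run:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialResponse mirrors the {"responses":[...]} body returned by /dial.
type dialResponse struct {
	Responses []string `json:"responses"`
}

func main() {
	// Ephemeral addresses taken from the log above.
	url := "http://10.244.2.57:8080/dial?request=hostname&protocol=udp&host=10.96.43.129&port=90&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		panic(err)
	}
	fmt.Println("answered by:", dr.Responses)
}

The framework treats an empty response set as "keep waiting" and retries up to the MaxTries it derived from the endpoint count, which is why the log shows the awaited set map[netserver-0:{}] shrinking to map[] as answers arrive.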
• [SLOW TEST:35.625 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":51,"completed":12,"skipped":1631,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:52:54.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 STEP: Performing setup for networking test in namespace nettest-4691 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:52:54.632: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:52:54.792: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:56.816: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:52:58.798: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:53:00.987: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:02.822: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:05.331: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:06.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:08.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:10.798: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:53:10.805: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:13.464: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:14.809: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:16.941: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:53:23.227: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:53:23.227: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating 
the service on top of the pods in kubernetes Mar 25 16:53:23.342: INFO: Service node-port-service in namespace nettest-4691 found. Mar 25 16:53:23.436: INFO: Service session-affinity-service in namespace nettest-4691 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 16:53:24.482: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:53:25.486: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) 172.18.0.17 (node) --> 10.96.235.198:90 (config.clusterIP) Mar 25 16:53:25.491: INFO: Going to poll 10.96.235.198 on port 90 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:53:25.494: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.235.198 90 | grep -v '^\s*$'] Namespace:nettest-4691 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:53:25.494: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:53:26.632: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 16:53:28.636: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.235.198 90 | grep -v '^\s*$'] Namespace:nettest-4691 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:53:28.636: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:53:29.749: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:30015 (nodeIP) Mar 25 16:53:29.750: INFO: Going to poll 172.18.0.17 on port 30015 at least 0 times, with a maximum of 34 tries before failing Mar 25 16:53:29.753: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30015 | grep -v '^\s*$'] Namespace:nettest-4691 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:53:29.753: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:53:30.884: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1]) Mar 25 16:53:32.991: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30015 | grep -v '^\s*$'] Namespace:nettest-4691 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:53:32.991: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:53:34.239: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:53:34.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4691" for this suite. 
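The node-Service check above execs "echo hostName | nc -w 1 -u <ip> <port>" from the host-network test pod and keeps polling until every expected backend has answered at least once (up to the MaxTries of 34 derived from the 2 endpoints). A simplified standalone version of that bookkeeping, with probe() standing in for the real exec'd nc call:

package main

import (
	"fmt"
	"math/rand"
)

// probe stands in for one "echo hostName | nc -w 1 -u <ip> <port>" exec;
// here it fakes a backend answering so the loop can run on its own.
func probe() string {
	backends := []string{"netserver-0", "netserver-1"}
	return backends[rand.Intn(len(backends))]
}

func main() {
	expected := map[string]bool{"netserver-0": true, "netserver-1": true}
	seen := map[string]bool{}
	const maxTries = 34 // scaled from the endpoint count, as in the log above

	for try := 1; try <= maxTries; try++ {
		seen[probe()] = true
		if len(seen) == len(expected) {
			fmt.Printf("found all %d expected endpoints after %d tries\n", len(expected), try)
			return
		}
		fmt.Printf("still waiting, seen so far: %v\n", seen)
	}
	fmt.Println("did not reach all endpoints within", maxTries, "tries")
}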
• [SLOW TEST:40.150 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","total":51,"completed":13,"skipped":1754,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:53:34.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 STEP: Performing setup for networking test in namespace nettest-9303 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 16:53:35.406: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 16:53:35.673: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:53:37.718: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:53:39.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 16:53:41.679: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:43.686: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:45.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:48.049: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:49.677: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:51.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 16:53:53.679: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 16:53:53.685: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:55.690: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:57.689: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 16:53:59.690: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 16:54:05.933: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 16:54:05.933: INFO: 
Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 16:54:07.860: INFO: Service node-port-service in namespace nettest-9303 found. Mar 25 16:54:08.310: INFO: Service session-affinity-service in namespace nettest-9303 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 16:54:09.342: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 16:54:10.346: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: creating a second service with same selector Mar 25 16:54:10.744: INFO: Service second-node-port-service in namespace nettest-9303 found. Mar 25 16:54:11.746: INFO: Waiting for amount of service:second-node-port-service endpoints to be 2 STEP: dialing(http) netserver-0 (endpoint) --> 10.96.154.9:80 (config.clusterIP) Mar 25 16:54:12.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.154.9&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:12.074: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:12.223: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:54:14.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.154.9&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:14.229: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:14.373: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:54:16.620: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.154.9&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:16.620: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:16.780: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:54:18.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.154.9&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:18.792: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:18.912: INFO: Waiting for responses: map[] Mar 25 16:54:18.912: INFO: reached 10.96.154.9 after 3/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:30530 (nodeIP) Mar 25 16:54:18.924: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=30530&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:18.924: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:19.022: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:54:21.028: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=30530&tries=1'] 
Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:21.028: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:21.127: INFO: Waiting for responses: map[] Mar 25 16:54:21.127: INFO: reached 172.18.0.17 after 1/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 10.96.143.250:80 (svc2.clusterIP) Mar 25 16:54:21.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.143.250&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:21.131: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:21.223: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:54:23.228: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.143.250&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:23.228: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:23.345: INFO: Waiting for responses: map[] Mar 25 16:54:23.345: INFO: reached 10.96.143.250 after 1/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:31105 (nodeIP) Mar 25 16:54:23.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31105&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:23.350: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:23.454: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:54:25.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31105&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:25.459: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:25.557: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:54:27.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31105&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:27.562: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:27.670: INFO: Waiting for responses: map[] Mar 25 16:54:27.671: INFO: reached 172.18.0.17 after 2/34 tries STEP: deleting the original node port service STEP: dialing(http) netserver-0 (endpoint) --> 10.96.143.250:80 (svc2.clusterIP) Mar 25 16:54:42.859: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.143.250&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:42.859: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:42.984: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 16:54:45.023: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=10.96.143.250&port=80&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:45.023: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:45.142: INFO: Waiting for responses: map[] Mar 25 16:54:45.143: INFO: reached 10.96.143.250 after 1/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:31105 (nodeIP) Mar 25 16:54:45.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31105&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:45.146: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:45.236: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 16:54:47.239: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.68:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=31105&tries=1'] Namespace:nettest-9303 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 16:54:47.240: INFO: >>> kubeConfig: /root/.kube/config Mar 25 16:54:47.345: INFO: Waiting for responses: map[] Mar 25 16:54:47.345: INFO: reached 172.18.0.17 after 1/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:54:47.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9303" for this suite. 
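The multiple endpoint-Services spec works because any number of Services may select the same pods: each Service gets its own ClusterIP and NodePort, but their endpoints track the same backends, which is why svc2.clusterIP and its NodePort keep answering after the original node port service is deleted above. A hedged client-go sketch of creating such a second Service; the selector key/value and ports are illustrative placeholders, not the ones the framework generates:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A second NodePort Service over the same selector the netserver pods carry;
	// it gets its own ClusterIP/NodePort but the same endpoint set.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "second-node-port-service"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"selector": "netserver"}, // placeholder labels
			Ports: []corev1.ServicePort{{
				Name:       "http",
				Protocol:   corev1.ProtocolTCP,
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("nettest-9303").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}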
• [SLOW TEST:72.849 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector","total":51,"completed":14,"skipped":1816,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should only target nodes with endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:54:47.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 25 16:54:47.437: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 16:54:47.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-4725" for this suite. 
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.099 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should only target nodes with endpoints [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 16:54:47.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91 Mar 25 16:54:47.704: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
alternatives.log
containers/
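The proxy spec above reads the node's /logs/ directory listing (alternatives.log, containers/) repeatedly through the apiserver's node proxy subresource, /api/v1/nodes/latest-worker/proxy/logs/. One way to issue the same request with client-go's core REST client, assuming the node name and kubeconfig from this run, is roughly:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/latest-worker/proxy/logs/ ; the apiserver proxies the
	// request to the kubelet, which serves its log directory listing.
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("latest-worker").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}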
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-2406
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 16:54:47.961: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 16:54:48.075: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:54:50.079: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:54:52.081: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:54:54.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:54:56.079: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:54:58.079: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:55:00.079: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:55:02.081: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:55:04.080: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 16:55:04.086: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 16:55:06.091: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 16:55:08.091: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 16:55:14.116: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 16:55:14.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 16:55:14.233: INFO: Service node-port-service in namespace nettest-2406 found.
Mar 25 16:55:14.309: INFO: Service session-affinity-service in namespace nettest-2406 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 16:55:15.339: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 16:55:16.343: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.27.255:80
Mar 25 16:55:16.413: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:16.413: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:16.509: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:18.515: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:18.515: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:18.650: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:20.654: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:20.654: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:20.760: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:22.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:22.764: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:22.889: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:24.894: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:24.894: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:25.027: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:27.033: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:27.033: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:27.179: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:29.184: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:29.184: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:29.293: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:31.298: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:31.298: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:31.432: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:33.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:33.436: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:33.573: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
Mar 25 16:55:35.595: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:9080/dial?request=hostName&protocol=http&host=10.96.27.255&port=80&tries=1'] Namespace:nettest-2406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:55:35.595: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:55:35.739: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-2406, hostIp: 172.18.0.17, podIp: 10.244.2.73, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 16:55:08 +0000 UTC  }]" }
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:55:37.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2406" for this suite.

• [SLOW TEST:49.963 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":51,"completed":16,"skipped":2326,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
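The session-affinity spec above dials the affinity service ten times from the same test-container-pod, and every response comes back from netserver-0, which is what ClientIP session affinity guarantees. A sketch of the Service fields that produce this behaviour; the name, selector, timeout, and ports are illustrative rather than the framework's own values:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// sessionAffinityService shows the fields that make kube-proxy pin a given
// client IP to a single backend for the affinity timeout.
func sessionAffinityService() *corev1.Service {
	timeout := int32(10800) // the default ClientIP affinity timeout, in seconds
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "session-affinity-service"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"selector": "netserver"}, // placeholder
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}

func main() {
	_ = sessionAffinityService()
}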
------------------------------
[sig-network] NetworkPolicy API 
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:55:37.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Mar 25 16:55:38.649: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Mar 25 16:55:38.652: INFO: starting watch
STEP: patching
STEP: updating
Mar 25 16:55:38.677: INFO: waiting for watch events with expected annotations
Mar 25 16:55:38.677: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Mar 25 16:55:38.677: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:55:38.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-3511" for this suite.
•{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":51,"completed":17,"skipped":2506,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
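The NetworkPolicy API spec above exercises plain CRUD and watch semantics against networking.k8s.io/v1 rather than policy enforcement. A compressed client-go sketch of the create / list / delete-collection steps it walks through; the policy name is a placeholder and the namespace is this run's ephemeral one:

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	npClient := cs.NetworkingV1().NetworkPolicies("networkpolicies-3511")

	// create: a deny-all-ingress policy selecting every pod in the namespace.
	policy := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-all-example"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
	if _, err := npClient.Create(context.TODO(), policy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// list, as the spec's per-namespace listing step does.
	list, err := npClient.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("policies in namespace:", len(list.Items))

	// delete the whole collection, mirroring the spec's final step.
	if err := npClient.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}
}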
------------------------------
[sig-network] Services 
  should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:55:38.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-6058
STEP: changing service nodeport-range-test to out-of-range NodePort 12228
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 12228
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:55:39.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6058" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":51,"completed":18,"skipped":2684,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
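The out-of-range check above expects the apiserver to reject NodePort 12228, whether set via update or create, because it falls outside the default --service-node-port-range of 30000-32767. A hedged sketch of provoking that rejection with client-go and recognising it as a validation error; the Service name, selector, and namespace are placeholders:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// NodePort 12228 is below the default 30000-32767 range, so Create should
	// fail validation with an Invalid error.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-range-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "example"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				NodePort:   12228,
			}},
		},
	}
	_, err = cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if apierrors.IsInvalid(err) {
		fmt.Println("rejected as expected:", err)
	} else {
		fmt.Println("unexpected result:", err)
	}
}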
------------------------------
[sig-network] Services 
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:55:39.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-ba94a719-e087-41b5-ab57-68f203eb9c18]
STEP: Verifying pods for RC slow-terminating-unready-pod
Mar 25 16:55:39.317: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Mar 25 16:55:43.593: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-ftcg8]: "NOW: 2021-03-25 16:55:43.592239194 +0000 UTC m=+1.582081254", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-6430.svc.cluster.local
Mar 25 16:55:43.593: INFO: Creating new exec pod
Mar 25 16:55:50.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6430 exec execpod-xtdpb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/'
Mar 25 16:55:50.329: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/\n"
Mar 25 16:55:50.329: INFO: stdout: "NOW: 2021-03-25 16:55:50.317125248 +0000 UTC m=+8.306967343"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6430 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Mar 25 16:55:55.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6430 exec execpod-xtdpb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/; test "$?" -ne "0"'
Mar 25 16:55:56.679: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Mar 25 16:55:56.679: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Mar 25 16:55:56.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6430 exec execpod-xtdpb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/'
Mar 25 16:55:57.003: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6430.svc.cluster.local:80/\n"
Mar 25 16:55:57.003: INFO: stdout: "NOW: 2021-03-25 16:55:56.994802306 +0000 UTC m=+14.984644415"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-6430
STEP: deleting service tolerate-unready in namespace services-6430
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:55:57.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6430" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:18.343 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":51,"completed":19,"skipped":2718,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:55:57.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-6945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 16:55:57.774: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 16:55:58.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:56:00.288: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:56:02.287: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:56:04.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:06.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:08.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:10.288: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:12.291: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:14.309: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:16.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:56:18.920: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 16:56:19.181: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 16:56:25.365: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 16:56:25.365: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 16:56:25.573: INFO: Service node-port-service in namespace nettest-6945 found.
Mar 25 16:56:26.381: INFO: Service session-affinity-service in namespace nettest-6945 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 16:56:27.427: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 16:56:28.430: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.137.10:90 (config.clusterIP)
Mar 25 16:56:28.444: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:28.444: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:28.553: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 16:56:30.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:30.558: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:30.658: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 16:56:32.663: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:32.663: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:32.776: INFO: Waiting for responses: map[]
Mar 25 16:56:32.776: INFO: reached 10.96.137.10 after 2/34 tries
STEP: Deleting a pod which will be replaced with a new endpoint

Mar 25 16:56:32.884: INFO: Waiting for pod netserver-0 to disappear
Mar 25 16:56:33.134: INFO: Pod netserver-0 no longer exists
Mar 25 16:56:34.136: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(udp) test-container-pod --> 10.96.137.10:90 (config.clusterIP) (endpoint recovery)
Mar 25 16:56:39.143: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:39.143: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:39.274: INFO: Waiting for responses: map[]
Mar 25 16:56:41.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:41.279: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:41.403: INFO: Waiting for responses: map[]
Mar 25 16:56:43.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:43.408: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:43.528: INFO: Waiting for responses: map[]
Mar 25 16:56:45.531: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:45.531: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:45.700: INFO: Waiting for responses: map[]
Mar 25 16:56:47.705: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:47.705: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:47.837: INFO: Waiting for responses: map[]
Mar 25 16:56:49.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:49.842: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:49.948: INFO: Waiting for responses: map[]
Mar 25 16:56:51.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:51.952: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:52.080: INFO: Waiting for responses: map[]
Mar 25 16:56:54.099: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:54.100: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:54.197: INFO: Waiting for responses: map[]
Mar 25 16:56:56.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:56.203: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:56.299: INFO: Waiting for responses: map[]
Mar 25 16:56:58.303: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:56:58.303: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:56:58.452: INFO: Waiting for responses: map[]
Mar 25 16:57:00.455: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:00.455: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:00.567: INFO: Waiting for responses: map[]
Mar 25 16:57:02.572: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:02.572: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:02.688: INFO: Waiting for responses: map[]
Mar 25 16:57:04.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:04.693: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:04.828: INFO: Waiting for responses: map[]
Mar 25 16:57:06.833: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:06.833: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:06.942: INFO: Waiting for responses: map[]
Mar 25 16:57:08.946: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:08.946: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:09.066: INFO: Waiting for responses: map[]
Mar 25 16:57:11.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:11.134: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:11.339: INFO: Waiting for responses: map[]
Mar 25 16:57:13.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:13.344: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:13.454: INFO: Waiting for responses: map[]
Mar 25 16:57:15.519: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:15.519: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:15.645: INFO: Waiting for responses: map[]
Mar 25 16:57:18.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:18.329: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:18.809: INFO: Waiting for responses: map[]
Mar 25 16:57:20.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:20.813: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:20.924: INFO: Waiting for responses: map[]
Mar 25 16:57:22.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:22.974: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:23.146: INFO: Waiting for responses: map[]
Mar 25 16:57:25.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:25.150: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:25.251: INFO: Waiting for responses: map[]
Mar 25 16:57:27.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:27.274: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:27.400: INFO: Waiting for responses: map[]
Mar 25 16:57:29.760: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:29.760: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:30.001: INFO: Waiting for responses: map[]
Mar 25 16:57:32.015: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:32.015: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:32.108: INFO: Waiting for responses: map[]
Mar 25 16:57:34.113: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:34.113: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:34.330: INFO: Waiting for responses: map[]
Mar 25 16:57:36.334: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:36.335: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:36.469: INFO: Waiting for responses: map[]
Mar 25 16:57:38.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:38.472: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:38.576: INFO: Waiting for responses: map[]
Mar 25 16:57:40.580: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:40.580: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:40.689: INFO: Waiting for responses: map[]
Mar 25 16:57:42.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:42.694: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:42.805: INFO: Waiting for responses: map[]
Mar 25 16:57:44.810: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:44.810: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:44.915: INFO: Waiting for responses: map[]
Mar 25 16:57:47.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:47.138: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:47.639: INFO: Waiting for responses: map[]
Mar 25 16:57:49.931: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:49.931: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:50.702: INFO: Waiting for responses: map[]
Mar 25 16:57:52.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.77:9080/dial?request=hostname&protocol=udp&host=10.96.137.10&port=90&tries=1'] Namespace:nettest-6945 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:57:52.915: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:57:53.200: INFO: Waiting for responses: map[]
Mar 25 16:57:53.200: INFO: reached 10.96.137.10 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:57:53.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6945" for this suite.

• [SLOW TEST:116.249 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":51,"completed":20,"skipped":2824,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Loadbalancing: L7 [Slow] Nginx 
  should conform to Ingress spec
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:57:53.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Mar 25 16:57:54.566: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Mar 25 16:57:54.928: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 16:57:54.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1953" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [1.671 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
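Editor's note: the Ingress conformance spec above is skipped because this run uses the local provider while the test is gated to gce/gke. As a rough illustration (flag values are examples, not from this run), provider-gated specs are selected when the e2e binary is invoked with a matching --provider and the usual ginkgo focus flags:

    # Sketch: invoking the e2e binary against a GCE-backed cluster so that
    # provider-gated Ingress specs are not skipped. Paths are placeholders.
    ./e2e.test \
      --provider=gce \
      --kubeconfig="$HOME/.kube/config" \
      -ginkgo.focus='Loadbalancing.*Nginx.*should conform to Ingress spec'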
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 16:57:55.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-7227
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 16:57:56.384: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 16:57:56.874: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:57:59.150: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:58:01.185: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:58:03.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 16:58:04.936: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:06.886: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:08.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:10.889: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:12.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:14.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:16.878: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 16:58:18.879: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 16:58:18.884: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 16:58:24.952: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 16:58:24.952: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 16:58:25.017: INFO: Service node-port-service in namespace nettest-7227 found.
Mar 25 16:58:25.161: INFO: Service session-affinity-service in namespace nettest-7227 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 16:58:26.198: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 16:58:27.215: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:30795 (nodeIP) and getting ALL host endpoints
Mar 25 16:58:27.228: INFO: Going to poll 172.18.0.17 on port 30795 at least 0 times, with a maximum of 34 tries before failing
Mar 25 16:58:27.230: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:27.230: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:28.326: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Mar 25 16:58:30.330: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:30.330: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:31.431: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
STEP: Deleting the node port access point
STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:30795 (nodeIP) and getting ZERO host endpoints
Mar 25 16:58:46.820: INFO: Going to poll 172.18.0.17 on port 30795 at least 34 times, with a maximum of 34 tries before failing
Mar 25 16:58:46.824: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:46.824: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:46.926: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:46.926: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:58:48.930: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:48.930: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:49.028: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:49.028: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:58:51.033: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:51.033: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:51.140: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:51.140: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:58:53.146: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:53.146: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:53.262: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:53.263: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:58:55.266: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:55.266: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:55.383: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:55.383: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:58:57.387: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:58:57.387: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:58:57.484: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:58:57.484: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:00.105: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:00.105: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:00.964: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:00.964: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:03.247: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:03.247: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:03.721: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:03.721: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:06.082: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:06.082: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:11.764: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:11.764: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:13.832: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:13.832: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:14.052: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:14.052: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:16.088: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:16.088: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:16.200: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:16.200: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:18.311: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:18.311: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:18.782: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:18.782: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:20.874: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:20.874: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:20.968: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:20.968: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:23.204: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:23.204: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:23.305: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:23.305: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:25.333: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:25.333: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:25.759: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:25.759: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:27.838: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:27.838: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:28.513: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:28.513: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:30.642: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:30.642: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:31.119: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:31.119: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:33.287: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:33.287: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:33.403: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:33.403: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:35.416: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:35.416: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:35.587: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:35.587: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:37.592: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:37.592: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:37.689: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:37.689: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:39.695: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:39.695: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:39.789: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:39.789: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:41.800: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:41.800: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:41.906: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:41.906: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:43.910: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:43.910: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:44.012: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:44.012: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:46.113: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:46.113: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:46.418: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:46.418: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:48.423: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:48.423: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:48.567: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:48.567: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:50.572: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:50.572: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:50.677: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:50.677: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:52.682: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:52.682: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:52.809: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:52.809: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:54.815: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:54.815: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:54.965: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:54.965: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:56.970: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:56.970: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:57.083: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:57.084: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 16:59:59.131: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 16:59:59.131: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 16:59:59.221: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 16:59:59.221: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:00:01.226: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:01.226: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:00:01.343: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:00:01.343: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:00:03.348: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:03.348: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:00:03.469: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:00:03.469: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:00:05.474: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:05.474: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:00:05.590: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:00:05.590: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:00:07.596: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\s*$'] Namespace:nettest-7227 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:07.596: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:00:07.737: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.17 30795 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:00:07.738: INFO: Found all 0 expected endpoints: []
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:00:07.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7227" for this suite.

• [SLOW TEST:132.364 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","total":51,"completed":21,"skipped":2903,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Netpol API 
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:00:07.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Mar 25 17:00:07.870: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Mar 25 17:00:07.876: INFO: starting watch
STEP: patching
STEP: updating
Mar 25 17:00:07.885: INFO: waiting for watch events with expected annotations
Mar 25 17:00:07.885: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Mar 25 17:00:07.885: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:00:07.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-8036" for this suite.
•{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":51,"completed":22,"skipped":3057,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:00:07.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Mar 25 17:00:08.110: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-571c1dc7-f4bc-403e-8f87-f5a8602815cb  dns-6190  f74c1ddb-8ab8-46e8-b8a0-fe9c18909d4c 1259755 0 2021-03-25 17:00:08 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2021-03-25 17:00:08 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-gqbcl,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-nxxtb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nxxtb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-nxxtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 25 17:00:14.248: INFO: testServerIP is 10.244.2.114
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Mar 25 17:00:14.251: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-6190  9469cf75-5716-4ec8-9875-78c565e492e1 1259823 0 2021-03-25 17:00:14 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2021-03-25 17:00:14 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nxxtb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nxxtb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nxxtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.2.114],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCo
nstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Mar 25 17:00:18.303: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-6190 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:18.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Mar 25 17:00:18.410: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-6190 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:00:18.410: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:00:18.523: INFO: Deleting pod e2e-dns-utils...
Mar 25 17:00:18.557: INFO: Deleting pod e2e-configmap-dns-server-571c1dc7-f4bc-403e-8f87-f5a8602815cb...
Mar 25 17:00:18.672: INFO: Deleting configmap e2e-coredns-configmap-gqbcl...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:00:18.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6190" for this suite.

• [SLOW TEST:11.239 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":51,"completed":23,"skipped":3135,"failed":1,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:00:19.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-7161
STEP: creating replication controller externalip-test in namespace services-7161
I0325 17:00:20.162334       7 runners.go:190] Created replication controller with name: externalip-test, namespace: services-7161, replica count: 2
I0325 17:00:23.213650       7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:00:26.213813       7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:00:29.214425       7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:00:32.215360       7 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 25 17:00:32.215: INFO: Creating new exec pod
E0325 17:00:36.240121       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:00:37.408737       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:00:40.078391       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:00:46.061977       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:00:58.802735       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:01:21.537106       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:01:49.098823       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 17:02:36.239: FAIL: Unexpected error:
    <*errors.errorString | 0xc002736030>: {
        s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint externalip-test within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.12()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201 +0x30f
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002a94900, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-7161".
STEP: Found 14 events.
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:20 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-2lkct
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:20 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-mdb59
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:20 +0000 UTC - event for externalip-test-2lkct: {default-scheduler } Scheduled: Successfully assigned services-7161/externalip-test-2lkct to latest-worker
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:20 +0000 UTC - event for externalip-test-mdb59: {default-scheduler } Scheduled: Successfully assigned services-7161/externalip-test-mdb59 to latest-worker2
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:21 +0000 UTC - event for externalip-test-2lkct: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:22 +0000 UTC - event for externalip-test-mdb59: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:28 +0000 UTC - event for externalip-test-2lkct: {kubelet latest-worker} Created: Created container externalip-test
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:28 +0000 UTC - event for externalip-test-mdb59: {kubelet latest-worker2} Created: Created container externalip-test
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:29 +0000 UTC - event for externalip-test-2lkct: {kubelet latest-worker} Started: Started container externalip-test
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:29 +0000 UTC - event for externalip-test-mdb59: {kubelet latest-worker2} Started: Started container externalip-test
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:32 +0000 UTC - event for execpodnnffd: {default-scheduler } Scheduled: Successfully assigned services-7161/execpodnnffd to latest-worker2
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:33 +0000 UTC - event for execpodnnffd: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:34 +0000 UTC - event for execpodnnffd: {kubelet latest-worker2} Started: Started container agnhost-container
Mar 25 17:02:36.245: INFO: At 2021-03-25 17:00:34 +0000 UTC - event for execpodnnffd: {kubelet latest-worker2} Created: Created container agnhost-container
Mar 25 17:02:36.248: INFO: POD                    NODE            PHASE    GRACE  CONDITIONS
Mar 25 17:02:36.248: INFO: execpodnnffd           latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:32 +0000 UTC  }]
Mar 25 17:02:36.248: INFO: externalip-test-2lkct  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:20 +0000 UTC  }]
Mar 25 17:02:36.248: INFO: externalip-test-mdb59  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:00:20 +0000 UTC  }]
Mar 25 17:02:36.249: INFO: 
Mar 25 17:02:36.252: INFO: 
Logging node info for node latest-control-plane
Mar 25 17:02:36.255: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    cc9ffc7a-24ee-4720-b82b-ca49361a1767 1259574 0 2021-03-22 08:06:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 16:59:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 16:59:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 16:59:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 16:59:39 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:02:36.255: INFO: 
Logging kubelet events for node latest-control-plane
Mar 25 17:02:36.257: INFO: 
Logging pods the kubelet thinks are on node latest-control-plane

Mar 25 17:02:36.278: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container etcd ready: true, restart count 0
Mar 25 17:02:36.278: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 25 17:02:36.278: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:02:36.278: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 17:02:36.278: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container coredns ready: true, restart count 0
Mar 25 17:02:36.278: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container coredns ready: true, restart count 0
Mar 25 17:02:36.278: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 25 17:02:36.278: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 25 17:02:36.278: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.278: INFO: 	Container local-path-provisioner ready: true, restart count 0
W0325 17:02:36.283819       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:02:36.363: INFO: 
Latency metrics for node latest-control-plane
Mar 25 17:02:36.363: INFO: 
Logging node info for node latest-worker
Mar 25 17:02:36.366: INFO: Node Info: &Node{ObjectMeta:{latest-worker    d799492c-1b1f-4258-b431-31204511a98f 1260243 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/e2e-0ce89bcf-be28-49b7-8dee-5f6ab5510737:95 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:45:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 16:58:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubernetes.io/e2e-0ce89bcf-be28-49b7-8dee-5f6ab5510737":{}}},"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:02:36.367: INFO: 
Logging kubelet events for node latest-worker
Mar 25 17:02:36.369: INFO: 
Logging pods the kubelet thinks are on node latest-worker
Mar 25 17:02:36.388: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:02:36.388: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 17:02:36.388: INFO: suspend-false-to-true-mk7tm started at 2021-03-25 17:00:07 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container c ready: true, restart count 0
Mar 25 17:02:36.388: INFO: pod4 started at 2021-03-25 16:58:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container agnhost ready: true, restart count 0
Mar 25 17:02:36.388: INFO: suspend-false-to-true-pn4pk started at 2021-03-25 17:00:07 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container c ready: true, restart count 0
Mar 25 17:02:36.388: INFO: externalip-test-2lkct started at 2021-03-25 17:00:20 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.388: INFO: 	Container externalip-test ready: true, restart count 0
W0325 17:02:36.393300       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:02:36.509: INFO: 
Latency metrics for node latest-worker
Mar 25 17:02:36.509: INFO: 
Logging node info for node latest-worker2
Mar 25 17:02:36.511: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    525d2fa2-95f1-4436-b726-c3866136dd3a 1260242 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:38:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 17:01:59 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:02:36.512: INFO: 
Logging kubelet events for node latest-worker2
Mar 25 17:02:36.514: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 25 17:02:36.533: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:02:36.533: INFO: pod-submit-status-1-9 started at 2021-03-25 17:02:35 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container busybox ready: false, restart count 0
Mar 25 17:02:36.533: INFO: pod-submit-status-0-6 started at 2021-03-25 17:02:25 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container busybox ready: false, restart count 0
Mar 25 17:02:36.533: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container volume-tester ready: false, restart count 0
Mar 25 17:02:36.533: INFO: pod-submit-status-2-10 started at 2021-03-25 17:02:25 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container busybox ready: false, restart count 0
Mar 25 17:02:36.533: INFO: execpodnnffd started at 2021-03-25 17:00:32 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container agnhost-container ready: true, restart count 0
Mar 25 17:02:36.533: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 17:02:36.533: INFO: externalip-test-mdb59 started at 2021-03-25 17:00:20 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:02:36.533: INFO: 	Container externalip-test ready: true, restart count 0
W0325 17:02:36.537783       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:02:37.033: INFO: 
Latency metrics for node latest-worker2
Mar 25 17:02:37.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7161" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [137.811 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177

  Mar 25 17:02:36.239: Unexpected error:
      <*errors.errorString | 0xc002736030>: {
          s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint externalip-test within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201
------------------------------
{"msg":"FAILED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":51,"completed":23,"skipped":3190,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
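The failure above means the externalip-test Endpoints object never reported a ready address within the 2m0s wait. A minimal client-go sketch of that readiness check follows; it is illustrative only (not the e2e framework's own helper), and it assumes the kubeconfig path and the services-7161 namespace seen in this run.

// endpoints_ready.go - polls the Endpoints object for a service and reports whether any
// ready addresses exist, which is the condition the timeout above was waiting on.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Namespace and endpoint name are taken from this log.
		ep, err := client.CoreV1().Endpoints("services-7161").Get(context.TODO(), "externalip-test", metav1.GetOptions{})
		if err == nil {
			ready := 0
			for _, ss := range ep.Subsets {
				ready += len(ss.Addresses)
			}
			if ready > 0 {
				fmt.Printf("endpoint externalip-test has %d ready address(es)\n", ready)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ready addresses for endpoint externalip-test within 2m0s")
}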
------------------------------
[sig-network] Services 
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:02:37.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Mar 25 17:02:37.121: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:39.124: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:41.198: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:43.186: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:45.605: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Mar 25 17:02:45.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8485 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Mar 25 17:02:50.346: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Mar 25 17:02:50.346: INFO: stdout: "iptables"
Mar 25 17:02:50.346: INFO: proxyMode: iptables
Mar 25 17:02:50.433: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Mar 25 17:02:50.439: INFO: Pod kube-proxy-mode-detector no longer exists
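The proxy-mode probe above works because kube-proxy serves its active mode at localhost:10249/proxyMode on each node. A small Go sketch of the same probe, assuming it runs somewhere that localhost:10249 actually reaches kube-proxy (for example a host-network pod like the detector used here):

// proxymode_probe.go - GET kube-proxy's /proxyMode endpoint and print the mode.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://localhost:10249/proxyMode")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	mode, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println("proxyMode:", string(mode)) // e.g. "iptables", as in the log above
}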
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-8485
Mar 25 17:02:50.724: INFO: sourceip-test cluster ip: 10.96.226.60
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Mar 25 17:02:52.054: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:54.059: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:02:56.057: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-8485 to expose endpoints map[echo-sourceip:[8080]]
Mar 25 17:02:56.062: INFO: successfully validated that service sourceip-test in namespace services-8485 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Mar 25 17:02:56.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Mar 25 17:02:58.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6996dfb859\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 17:03:00.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752288576, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6996dfb859\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 25 17:03:02.093: INFO: Waiting up to 2m0s to get response from 10.96.226.60:8080
Mar 25 17:03:02.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8485 exec pause-pod-6996dfb859-tlh8r -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.226.60:8080/clientip'
Mar 25 17:03:02.306: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.96.226.60:8080/clientip\n"
Mar 25 17:03:02.306: INFO: stdout: "10.244.1.21:50930"
STEP: Verifying the preserved source ip
Mar 25 17:03:02.306: INFO: Waiting up to 2m0s to get response from 10.96.226.60:8080
Mar 25 17:03:02.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8485 exec pause-pod-6996dfb859-tt82z -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.226.60:8080/clientip'
Mar 25 17:03:02.488: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.96.226.60:8080/clientip\n"
Mar 25 17:03:02.488: INFO: stdout: "10.244.2.117:49022"
STEP: Verifying the preserved source ip
Mar 25 17:03:02.488: INFO: Deleting deployment
Mar 25 17:03:02.492: INFO: Cleaning up the echo server pod
Mar 25 17:03:02.542: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:03:02.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8485" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:26.082 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":51,"completed":24,"skipped":3411,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
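For reference, the check behind "Verifying the preserved source ip" amounts to comparing the host part of the /clientip reply with the calling pod's podIP. A hedged client-go sketch, reusing the pod name, namespace, and clientip value from this run (all of which disappear once the namespace is destroyed):

// sourceip_check.go - split the "ip:port" clientip reply and compare with the pod's status.podIP.
package main

import (
	"context"
	"fmt"
	"net"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Pod name and namespace come from this log; the clientip value would normally be the curl output.
	pod, err := client.CoreV1().Pods("services-8485").Get(context.TODO(), "pause-pod-6996dfb859-tlh8r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	clientip := "10.244.1.21:50930"
	host, _, err := net.SplitHostPort(clientip)
	if err != nil {
		panic(err)
	}
	if host == pod.Status.PodIP {
		fmt.Println("source pod IP preserved:", host)
	} else {
		fmt.Printf("source IP %s does not match pod IP %s\n", host, pod.Status.PodIP)
	}
}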
------------------------------
[sig-network] ESIPP [Slow] 
  should work for type=NodePort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:03:03.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 25 17:03:03.768: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:03:03.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4598" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.813 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:03:03.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Mar 25 17:03:04.769: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-5775
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:03:05.329: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:03:05.583: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:07.586: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:09.611: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:11.586: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:13.586: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:15.586: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:17.587: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:19.587: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:21.587: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:23.586: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:25.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:29.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:29.798: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:03:31.606: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:03:31.611: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:03:37.885: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:03:37.885: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:03:38.589: INFO: Service node-port-service in namespace nettest-5775 found.
Mar 25 17:03:39.111: INFO: Service session-affinity-service in namespace nettest-5775 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:03:40.174: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:03:41.179: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.207.180:80 (config.clusterIP)
Mar 25 17:03:41.187: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.26:9080/dial?request=echo?msg=42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.207.180&port=80&tries=1'] Namespace:nettest-5775 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:03:41.187: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:03:41.317: INFO: Waiting for responses: map[]
Mar 25 17:03:41.317: INFO: reached 10.96.207.180 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:03:41.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5775" for this suite.

• [SLOW TEST:36.438 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":51,"completed":26,"skipped":3824,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
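The "large requests" dial above goes through the agnhost test container's /dial endpoint, with the long 4242... payload embedded in the echo request. A small sketch of how such a URL can be assembled; the pod IP, service IP, and port come from this run, while the payload length here is only an assumption:

// large_request_url.go - build a /dial URL with a large echo message, analogous to the request above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	payload := strings.Repeat("42", 1000) // large echo body, like the 4242... message in the log
	dialURL := fmt.Sprintf(
		"http://10.244.1.26:9080/dial?request=echo?msg=%s&protocol=http&host=10.96.207.180&port=80&tries=1",
		payload)
	fmt.Println(len(dialURL), "byte URL:", dialURL[:80], "...")
}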
------------------------------
[sig-network] KubeProxy 
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:03:41.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Mar 25 17:03:41.513: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:43.516: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:45.517: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node latest-worker2 (node ip: 172.18.0.15, image: k8s.gcr.io/e2e-test-images/agnhost:2.28)
Mar 25 17:03:45.534: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:47.600: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:49.537: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:51.538: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:53.539: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node latest-worker (node ip: 172.18.0.17, image: k8s.gcr.io/e2e-test-images/agnhost:2.28)
Mar 25 17:03:55.553: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:57.558: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:03:59.557: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Mar 25 17:03:59.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kube-proxy-8272 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 172.18.0.15 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Mar 25 17:03:59.804: INFO: stderr: "+ grep -m 1 CLOSE_WAIT.*dport=11302\n+ conntrack -L -f ipv4 -d 172.18.0.15\nconntrack v1.4.5 (conntrack-tools): 1 flow entries have been shown.\n"
Mar 25 17:03:59.804: INFO: stdout: "tcp      6 3598 CLOSE_WAIT src=10.244.2.119 dst=172.18.0.15 sport=37412 dport=11302 src=172.18.0.15 dst=172.18.0.17 sport=11302 dport=37412 [ASSURED] mark=0 use=1\n"
Mar 25 17:03:59.804: INFO: conntrack entry for node 172.18.0.15 and port 11302:  tcp      6 3598 CLOSE_WAIT src=10.244.2.119 dst=172.18.0.15 sport=37412 dport=11302 src=172.18.0.15 dst=172.18.0.17 sport=11302 dport=37412 [ASSURED] mark=0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:03:59.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-8272" for this suite.

• [SLOW TEST:18.470 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":51,"completed":27,"skipped":3923,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
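In the conntrack line captured above, the third field (3598) is the number of seconds left before the CLOSE_WAIT entry expires, which the test compares against the roughly one-hour CLOSE_WAIT timeout kube-proxy configures (--conntrack-tcp-timeout-close-wait defaults to 1h). An illustrative parser for that line, not the e2e test's own code:

// closewait_timeout.go - extract the remaining lifetime from a conntrack -L entry.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	line := "tcp      6 3598 CLOSE_WAIT src=10.244.2.119 dst=172.18.0.15 sport=37412 dport=11302 src=172.18.0.15 dst=172.18.0.17 sport=11302 dport=37412 [ASSURED] mark=0 use=1"
	fields := strings.Fields(line)
	secs, err := strconv.Atoi(fields[2]) // field 3: seconds until the entry expires
	if err != nil {
		panic(err)
	}
	const expected = 3600 // kube-proxy's default CLOSE_WAIT conntrack timeout of 1h, in seconds
	if secs > 0 && secs <= expected {
		fmt.Printf("CLOSE_WAIT entry expires in %ds (within the expected %ds window)\n", secs, expected)
	} else {
		fmt.Printf("unexpected CLOSE_WAIT timeout: %ds\n", secs)
	}
}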
------------------------------
[sig-network] Services 
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:03:59.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-482
STEP: creating service service-headless in namespace services-482
STEP: creating replication controller service-headless in namespace services-482
I0325 17:03:59.973148       7 runners.go:190] Created replication controller with name: service-headless, namespace: services-482, replica count: 3
I0325 17:04:03.024685       7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:04:06.024947       7 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-482
STEP: creating service service-headless-toggled in namespace services-482
STEP: creating replication controller service-headless-toggled in namespace services-482
I0325 17:04:07.027743       7 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-482, replica count: 3
I0325 17:04:10.079673       7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:04:13.080981       7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:04:16.082063       7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Mar 25 17:04:16.089: INFO: Creating new host exec pod
Mar 25 17:04:16.094: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:18.117: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:20.099: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:04:20.099: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:04:24.254: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done" in pod services-482/verify-service-up-host-exec-pod
Mar 25 17:04:24.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done'
Mar 25 17:04:24.839: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n"
Mar 25 17:04:24.839: INFO: stdout: "service-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\
nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\n"
Mar 25 17:04:24.839: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done" in pod services-482/verify-service-up-exec-pod-tq8cx
Mar 25 17:04:24.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-up-exec-pod-tq8cx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done'
Mar 25 17:04:25.179: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n"
Mar 25 17:04:25.179: INFO: stdout: "service-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\
nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-482
STEP: Deleting pod verify-service-up-exec-pod-tq8cx in namespace services-482
STEP: verifying service-headless is not up
Mar 25 17:04:26.394: INFO: Creating new host exec pod
Mar 25 17:04:26.928: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:29.049: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:31.653: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:33.386: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:34.935: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:04:34.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed'
Mar 25 17:04:37.134: INFO: rc: 28
Mar 25 17:04:37.134: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed" in pod services-482/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.174.32:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-482
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Mar 25 17:04:37.473: INFO: Creating new host exec pod
Mar 25 17:04:38.233: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:40.937: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:42.503: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:44.349: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:46.379: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:04:46.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.12.1:80 && echo service-down-failed'
Mar 25 17:04:48.619: INFO: rc: 28
Mar 25 17:04:48.619: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.12.1:80 && echo service-down-failed" in pod services-482/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.12.1:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.12.1:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-482
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
Mar 25 17:04:48.811: INFO: Creating new host exec pod
Mar 25 17:04:49.871: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:51.979: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:53.912: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:56.100: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:04:58.553: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:00.533: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:05:00.533: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:05:08.852: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done" in pod services-482/verify-service-up-host-exec-pod
Mar 25 17:05:08.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done'
Mar 25 17:05:09.209: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n"
Mar 25 17:05:09.209: INFO: stdout: "service-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\
nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\n"
Mar 25 17:05:09.210: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done" in pod services-482/verify-service-up-exec-pod-4wl7h
Mar 25 17:05:09.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-up-exec-pod-4wl7h -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.12.1:80 2>&1 || true; echo; done'
Mar 25 17:05:09.555: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.12.1:80\n+ echo\n"
Mar 25 17:05:09.555: INFO: stdout: "service-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\
nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-sq6s2\nservice-headless-toggled-gcz8q\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\nservice-headless-toggled-gcz8q\nservice-headless-toggled-sq6s2\nservice-headless-toggled-sq6s2\nservice-headless-toggled-ggt2t\nservice-headless-toggled-gcz8q\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-482
STEP: Deleting pod verify-service-up-exec-pod-4wl7h in namespace services-482
STEP: verifying service-headless is still not up
Mar 25 17:05:09.779: INFO: Creating new host exec pod
Mar 25 17:05:10.535: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:12.539: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:14.539: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:17.128: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:05:17.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed'
Mar 25 17:05:19.406: INFO: rc: 28
Mar 25 17:05:19.406: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed" in pod services-482/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-482 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.174.32:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.174.32:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-482
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:05:19.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-482" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:79.662 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":51,"completed":28,"skipped":4077,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:05:19.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-2541
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:05:20.356: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:05:20.487: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:22.490: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:24.792: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:26.559: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:28.517: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:05:30.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:32.490: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:34.489: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:37.253: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:39.105: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:41.352: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:42.831: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:45.331: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:46.767: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:05:49.003: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:05:49.013: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:06:00.047: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:06:00.047: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:06:00.278: INFO: Service node-port-service in namespace nettest-2541 found.
Mar 25 17:06:02.250: INFO: Service session-affinity-service in namespace nettest-2541 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:06:03.446: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:06:04.449: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.139.71:80 (config.clusterIP)
Mar 25 17:06:04.460: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:04.460: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:04.574: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:06.579: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:06.579: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:06.683: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:10.114: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:10.114: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:10.204: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:12.208: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:12.208: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:12.299: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:14.304: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:14.304: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:14.389: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:16.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:16.393: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:16.492: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:06:18.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:18.497: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:18.627: INFO: Waiting for responses: map[]
Mar 25 17:06:18.627: INFO: reached 10.96.139.71 after 6/34 tries
STEP: Deleting a pod which will be replaced with a new endpoint
Mar 25 17:06:21.076: INFO: Waiting for pod netserver-0 to disappear
Mar 25 17:06:21.276: INFO: Pod netserver-0 no longer exists
Mar 25 17:06:22.277: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(http) test-container-pod --> 10.96.139.71:80 (config.clusterIP) (endpoint recovery)
Mar 25 17:06:27.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:27.285: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:27.380: INFO: Waiting for responses: map[]
Mar 25 17:06:29.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:29.384: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:29.519: INFO: Waiting for responses: map[]
Mar 25 17:06:32.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:32.124: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:34.521: INFO: Waiting for responses: map[]
Mar 25 17:06:36.704: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:36.704: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:37.152: INFO: Waiting for responses: map[]
Mar 25 17:06:39.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:39.156: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:39.384: INFO: Waiting for responses: map[]
Mar 25 17:06:41.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:41.959: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:42.652: INFO: Waiting for responses: map[]
Mar 25 17:06:44.701: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:44.701: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:44.804: INFO: Waiting for responses: map[]
Mar 25 17:06:47.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:47.169: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:47.285: INFO: Waiting for responses: map[]
Mar 25 17:06:50.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:50.251: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:50.380: INFO: Waiting for responses: map[]
Mar 25 17:06:52.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:52.392: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:52.512: INFO: Waiting for responses: map[]
Mar 25 17:06:54.871: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:54.871: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:55.205: INFO: Waiting for responses: map[]
Mar 25 17:06:57.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:57.296: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:57.407: INFO: Waiting for responses: map[]
Mar 25 17:06:59.412: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:06:59.412: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:06:59.507: INFO: Waiting for responses: map[]
Mar 25 17:07:01.514: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:01.514: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:01.639: INFO: Waiting for responses: map[]
Mar 25 17:07:03.643: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:03.644: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:03.726: INFO: Waiting for responses: map[]
Mar 25 17:07:05.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:05.794: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:05.923: INFO: Waiting for responses: map[]
Mar 25 17:07:08.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:08.491: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:08.587: INFO: Waiting for responses: map[]
Mar 25 17:07:10.758: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:10.758: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:10.890: INFO: Waiting for responses: map[]
Mar 25 17:07:12.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:12.892: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:13.015: INFO: Waiting for responses: map[]
Mar 25 17:07:15.025: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:15.025: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:15.171: INFO: Waiting for responses: map[]
Mar 25 17:07:17.590: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:17.590: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:17.879: INFO: Waiting for responses: map[]
Mar 25 17:07:19.901: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:19.901: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:20.002: INFO: Waiting for responses: map[]
Mar 25 17:07:22.015: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:22.015: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:22.106: INFO: Waiting for responses: map[]
Mar 25 17:07:24.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:24.111: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:24.213: INFO: Waiting for responses: map[]
Mar 25 17:07:26.315: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:26.315: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:26.513: INFO: Waiting for responses: map[]
Mar 25 17:07:28.518: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:28.518: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:28.623: INFO: Waiting for responses: map[]
Mar 25 17:07:30.627: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:30.627: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:30.712: INFO: Waiting for responses: map[]
Mar 25 17:07:32.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:32.752: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:32.838: INFO: Waiting for responses: map[]
Mar 25 17:07:35.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:35.213: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:35.573: INFO: Waiting for responses: map[]
Mar 25 17:07:37.782: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:37.782: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:37.967: INFO: Waiting for responses: map[]
Mar 25 17:07:40.315: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:40.315: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:41.077: INFO: Waiting for responses: map[]
Mar 25 17:07:43.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:43.957: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:44.393: INFO: Waiting for responses: map[]
Mar 25 17:07:47.188: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:47.188: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:47.615: INFO: Waiting for responses: map[]
Mar 25 17:07:49.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.138:9080/dial?request=hostname&protocol=http&host=10.96.139.71&port=80&tries=1'] Namespace:nettest-2541 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:07:49.659: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:07:49.990: INFO: Waiting for responses: map[]
Mar 25 17:07:49.990: INFO: reached 10.96.139.71 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:07:49.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2541" for this suite.

• [SLOW TEST:150.850 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":51,"completed":29,"skipped":4416,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] ESIPP [Slow] 
  should work from pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:07:50.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 25 17:07:51.064: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:07:51.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3835" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [1.129 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
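The ESIPP spec above is gated on the cloud provider, so it self-skips on this local cluster. A rough sketch of how it would be enabled, assuming the standard e2e.test flags (paths and the focus pattern are placeholders to adjust):

  # run only the ESIPP specs against a GCE-backed cluster, where the
  # LoadBalancer machinery they need actually exists
  ./e2e.test --provider=gce --kubeconfig=$HOME/.kube/config \
    -ginkgo.focus='\[sig-network\] ESIPP'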
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Conntrack 
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:07:51.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Mar 25 17:07:55.027: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:07:58.022: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:07:59.107: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:01.513: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:03.838: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node latest-worker2
STEP: Server service created
Mar 25 17:08:04.535: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:07.212: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:09.115: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:10.542: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:14.249: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:16.490: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:08:17.932: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Mar 25 17:09:18.816: INFO: boom-server pod logs: 2021/03/25 17:08:00 external ip: 10.244.1.49
2021/03/25 17:08:00 listen on 0.0.0.0:9000
2021/03/25 17:08:00 probing 10.244.1.49
2021/03/25 17:08:14 tcp packet: &{SrcPort:38733 DestPort:9000 Seq:901626151 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:14 tcp packet: &{SrcPort:38733 DestPort:9000 Seq:901626152 Ack:1729980443 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:14 connection established
2021/03/25 17:08:14 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 151 77 103 27 225 123 53 189 185 40 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:14 checksumer: &{sum:444746 oddByte:33 length:39}
2021/03/25 17:08:14 ret:  444779
2021/03/25 17:08:14 ret:  51569
2021/03/25 17:08:14 ret:  51569
2021/03/25 17:08:14 boom packet injected
2021/03/25 17:08:14 tcp packet: &{SrcPort:38733 DestPort:9000 Seq:901626152 Ack:1729980443 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:16 tcp packet: &{SrcPort:34711 DestPort:9000 Seq:1014699962 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:16 tcp packet: &{SrcPort:34711 DestPort:9000 Seq:1014699963 Ack:2266122258 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:16 connection established
2021/03/25 17:08:16 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 135 151 135 16 193 114 60 123 23 187 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:16 checksumer: &{sum:479135 oddByte:33 length:39}
2021/03/25 17:08:16 ret:  479168
2021/03/25 17:08:16 ret:  20423
2021/03/25 17:08:16 ret:  20423
2021/03/25 17:08:16 boom packet injected
2021/03/25 17:08:16 tcp packet: &{SrcPort:34711 DestPort:9000 Seq:1014699963 Ack:2266122258 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:18 tcp packet: &{SrcPort:46783 DestPort:9000 Seq:854832608 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:18 tcp packet: &{SrcPort:46783 DestPort:9000 Seq:854832609 Ack:2396114387 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:18 connection established
2021/03/25 17:08:18 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 182 191 142 208 71 51 50 243 181 225 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:18 checksumer: &{sum:562927 oddByte:33 length:39}
2021/03/25 17:08:18 ret:  562960
2021/03/25 17:08:18 ret:  38680
2021/03/25 17:08:18 ret:  38680
2021/03/25 17:08:18 boom packet injected
2021/03/25 17:08:18 tcp packet: &{SrcPort:46783 DestPort:9000 Seq:854832609 Ack:2396114387 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:20 tcp packet: &{SrcPort:38769 DestPort:9000 Seq:4021695709 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:20 tcp packet: &{SrcPort:38769 DestPort:9000 Seq:4021695710 Ack:1716134172 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:20 connection established
2021/03/25 17:08:20 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 151 113 102 72 154 124 239 182 52 222 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:20 checksumer: &{sum:510519 oddByte:33 length:39}
2021/03/25 17:08:20 ret:  510552
2021/03/25 17:08:20 ret:  51807
2021/03/25 17:08:20 ret:  51807
2021/03/25 17:08:20 boom packet injected
2021/03/25 17:08:20 tcp packet: &{SrcPort:38769 DestPort:9000 Seq:4021695710 Ack:1716134172 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:22 tcp packet: &{SrcPort:36641 DestPort:9000 Seq:1194490265 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:22 tcp packet: &{SrcPort:36641 DestPort:9000 Seq:1194490266 Ack:2652621578 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:22 connection established
2021/03/25 17:08:22 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 143 33 158 26 68 106 71 50 121 154 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:22 checksumer: &{sum:422318 oddByte:33 length:39}
2021/03/25 17:08:22 ret:  422351
2021/03/25 17:08:22 ret:  29141
2021/03/25 17:08:22 ret:  29141
2021/03/25 17:08:22 boom packet injected
2021/03/25 17:08:22 tcp packet: &{SrcPort:36641 DestPort:9000 Seq:1194490266 Ack:2652621578 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:24 tcp packet: &{SrcPort:38733 DestPort:9000 Seq:901626153 Ack:1729980444 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:24 tcp packet: &{SrcPort:41639 DestPort:9000 Seq:3445586116 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:24 tcp packet: &{SrcPort:41639 DestPort:9000 Seq:3445586117 Ack:2325193819 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:24 connection established
2021/03/25 17:08:24 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 162 167 138 150 29 187 205 95 120 197 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:24 checksumer: &{sum:531723 oddByte:33 length:39}
2021/03/25 17:08:24 ret:  531756
2021/03/25 17:08:24 ret:  7476
2021/03/25 17:08:24 ret:  7476
2021/03/25 17:08:24 boom packet injected
2021/03/25 17:08:24 tcp packet: &{SrcPort:41639 DestPort:9000 Seq:3445586117 Ack:2325193819 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:26 tcp packet: &{SrcPort:34711 DestPort:9000 Seq:1014699964 Ack:2266122259 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:26 tcp packet: &{SrcPort:43561 DestPort:9000 Seq:3080572069 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:26 tcp packet: &{SrcPort:43561 DestPort:9000 Seq:3080572070 Ack:4271857571 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:26 connection established
2021/03/25 17:08:26 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 170 41 254 157 217 3 183 157 204 166 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:26 checksumer: &{sum:462465 oddByte:33 length:39}
2021/03/25 17:08:26 ret:  462498
2021/03/25 17:08:26 ret:  3753
2021/03/25 17:08:26 ret:  3753
2021/03/25 17:08:26 boom packet injected
2021/03/25 17:08:26 tcp packet: &{SrcPort:43561 DestPort:9000 Seq:3080572070 Ack:4271857571 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:28 tcp packet: &{SrcPort:46783 DestPort:9000 Seq:854832610 Ack:2396114388 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:28 tcp packet: &{SrcPort:35529 DestPort:9000 Seq:472935178 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:28 tcp packet: &{SrcPort:35529 DestPort:9000 Seq:472935179 Ack:923323091 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:28 connection established
2021/03/25 17:08:28 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 138 201 55 7 68 51 28 48 107 11 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:28 checksumer: &{sum:409097 oddByte:33 length:39}
2021/03/25 17:08:28 ret:  409130
2021/03/25 17:08:28 ret:  15920
2021/03/25 17:08:28 ret:  15920
2021/03/25 17:08:28 boom packet injected
2021/03/25 17:08:28 tcp packet: &{SrcPort:35529 DestPort:9000 Seq:472935179 Ack:923323091 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:30 tcp packet: &{SrcPort:38769 DestPort:9000 Seq:4021695711 Ack:1716134173 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:30 tcp packet: &{SrcPort:45755 DestPort:9000 Seq:1279122701 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:30 tcp packet: &{SrcPort:45755 DestPort:9000 Seq:1279122702 Ack:922481458 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:30 connection established
2021/03/25 17:08:30 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 178 187 54 250 108 146 76 61 221 14 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:30 checksumer: &{sum:496378 oddByte:33 length:39}
2021/03/25 17:08:30 ret:  496411
2021/03/25 17:08:30 ret:  37666
2021/03/25 17:08:30 ret:  37666
2021/03/25 17:08:30 boom packet injected
2021/03/25 17:08:30 tcp packet: &{SrcPort:45755 DestPort:9000 Seq:1279122702 Ack:922481458 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:32 tcp packet: &{SrcPort:36641 DestPort:9000 Seq:1194490267 Ack:2652621579 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:32 tcp packet: &{SrcPort:40429 DestPort:9000 Seq:3778278421 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:32 tcp packet: &{SrcPort:40429 DestPort:9000 Seq:3778278422 Ack:2314050268 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:32 connection established
2021/03/25 17:08:32 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 157 237 137 236 20 60 225 51 244 22 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:32 checksumer: &{sum:483212 oddByte:33 length:39}
2021/03/25 17:08:32 ret:  483245
2021/03/25 17:08:32 ret:  24500
2021/03/25 17:08:32 ret:  24500
2021/03/25 17:08:32 boom packet injected
2021/03/25 17:08:32 tcp packet: &{SrcPort:40429 DestPort:9000 Seq:3778278422 Ack:2314050268 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:34 tcp packet: &{SrcPort:41639 DestPort:9000 Seq:3445586118 Ack:2325193820 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:34 tcp packet: &{SrcPort:43617 DestPort:9000 Seq:1300211805 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:34 tcp packet: &{SrcPort:43617 DestPort:9000 Seq:1300211806 Ack:2294752656 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:34 connection established
2021/03/25 17:08:34 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 170 97 136 197 158 240 77 127 168 94 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:34 checksumer: &{sum:521282 oddByte:33 length:39}
2021/03/25 17:08:34 ret:  521315
2021/03/25 17:08:34 ret:  62570
2021/03/25 17:08:34 ret:  62570
2021/03/25 17:08:34 boom packet injected
2021/03/25 17:08:34 tcp packet: &{SrcPort:43617 DestPort:9000 Seq:1300211806 Ack:2294752656 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:36 tcp packet: &{SrcPort:43561 DestPort:9000 Seq:3080572071 Ack:4271857572 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:36 tcp packet: &{SrcPort:38055 DestPort:9000 Seq:3278399949 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:36 tcp packet: &{SrcPort:38055 DestPort:9000 Seq:3278399950 Ack:3263038257 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:36 connection established
2021/03/25 17:08:36 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 148 167 194 124 124 145 195 104 105 206 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:36 checksumer: &{sum:519035 oddByte:33 length:39}
2021/03/25 17:08:36 ret:  519068
2021/03/25 17:08:36 ret:  60323
2021/03/25 17:08:36 ret:  60323
2021/03/25 17:08:36 boom packet injected
2021/03/25 17:08:36 tcp packet: &{SrcPort:38055 DestPort:9000 Seq:3278399950 Ack:3263038257 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:38 tcp packet: &{SrcPort:35529 DestPort:9000 Seq:472935180 Ack:923323092 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:38 tcp packet: &{SrcPort:44787 DestPort:9000 Seq:1119072716 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:38 tcp packet: &{SrcPort:44787 DestPort:9000 Seq:1119072717 Ack:1498974094 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:38 connection established
2021/03/25 17:08:38 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 174 243 89 87 0 238 66 179 177 205 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:38 checksumer: &{sum:571511 oddByte:33 length:39}
2021/03/25 17:08:38 ret:  571544
2021/03/25 17:08:38 ret:  47264
2021/03/25 17:08:38 ret:  47264
2021/03/25 17:08:38 boom packet injected
2021/03/25 17:08:38 tcp packet: &{SrcPort:44787 DestPort:9000 Seq:1119072717 Ack:1498974094 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:40 tcp packet: &{SrcPort:45755 DestPort:9000 Seq:1279122703 Ack:922481459 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:40 tcp packet: &{SrcPort:44319 DestPort:9000 Seq:1672432953 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:40 tcp packet: &{SrcPort:44319 DestPort:9000 Seq:1672432954 Ack:1181086532 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:40 connection established
2021/03/25 17:08:40 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 173 31 70 100 108 164 99 175 77 58 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:40 checksumer: &{sum:462988 oddByte:33 length:39}
2021/03/25 17:08:40 ret:  463021
2021/03/25 17:08:40 ret:  4276
2021/03/25 17:08:40 ret:  4276
2021/03/25 17:08:40 boom packet injected
2021/03/25 17:08:40 tcp packet: &{SrcPort:44319 DestPort:9000 Seq:1672432954 Ack:1181086532 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:42 tcp packet: &{SrcPort:40429 DestPort:9000 Seq:3778278423 Ack:2314050269 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:42 tcp packet: &{SrcPort:38623 DestPort:9000 Seq:3247214411 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:42 tcp packet: &{SrcPort:38623 DestPort:9000 Seq:3247214412 Ack:3450275578 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:42 connection established
2021/03/25 17:08:42 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 150 223 205 165 128 90 193 140 143 76 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:42 checksumer: &{sum:505776 oddByte:33 length:39}
2021/03/25 17:08:42 ret:  505809
2021/03/25 17:08:42 ret:  47064
2021/03/25 17:08:42 ret:  47064
2021/03/25 17:08:42 boom packet injected
2021/03/25 17:08:42 tcp packet: &{SrcPort:38623 DestPort:9000 Seq:3247214412 Ack:3450275578 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:44 tcp packet: &{SrcPort:43617 DestPort:9000 Seq:1300211807 Ack:2294752657 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:44 tcp packet: &{SrcPort:44425 DestPort:9000 Seq:26120802 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:44 tcp packet: &{SrcPort:44425 DestPort:9000 Seq:26120803 Ack:1984621995 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:44 connection established
2021/03/25 17:08:44 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 173 137 118 73 103 11 1 142 146 99 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:44 checksumer: &{sum:446106 oddByte:33 length:39}
2021/03/25 17:08:44 ret:  446139
2021/03/25 17:08:44 ret:  52929
2021/03/25 17:08:44 ret:  52929
2021/03/25 17:08:44 boom packet injected
2021/03/25 17:08:44 tcp packet: &{SrcPort:44425 DestPort:9000 Seq:26120803 Ack:1984621995 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:46 tcp packet: &{SrcPort:38055 DestPort:9000 Seq:3278399951 Ack:3263038258 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:46 tcp packet: &{SrcPort:34357 DestPort:9000 Seq:3245635709 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:46 tcp packet: &{SrcPort:34357 DestPort:9000 Seq:3245635710 Ack:2773733733 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:46 connection established
2021/03/25 17:08:46 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 134 53 165 82 74 197 193 116 120 126 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:46 checksumer: &{sum:474923 oddByte:33 length:39}
2021/03/25 17:08:46 ret:  474956
2021/03/25 17:08:46 ret:  16211
2021/03/25 17:08:46 ret:  16211
2021/03/25 17:08:46 boom packet injected
2021/03/25 17:08:46 tcp packet: &{SrcPort:34357 DestPort:9000 Seq:3245635710 Ack:2773733733 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:48 tcp packet: &{SrcPort:44787 DestPort:9000 Seq:1119072718 Ack:1498974095 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:48 tcp packet: &{SrcPort:36439 DestPort:9000 Seq:3411836790 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:48 tcp packet: &{SrcPort:36439 DestPort:9000 Seq:3411836791 Ack:2533649830 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:48 connection established
2021/03/25 17:08:48 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 142 87 151 2 231 6 203 92 127 119 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:48 checksumer: &{sum:406483 oddByte:33 length:39}
2021/03/25 17:08:48 ret:  406516
2021/03/25 17:08:48 ret:  13306
2021/03/25 17:08:48 ret:  13306
2021/03/25 17:08:48 boom packet injected
2021/03/25 17:08:48 tcp packet: &{SrcPort:36439 DestPort:9000 Seq:3411836791 Ack:2533649830 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:50 tcp packet: &{SrcPort:44319 DestPort:9000 Seq:1672432955 Ack:1181086533 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:50 tcp packet: &{SrcPort:36953 DestPort:9000 Seq:3832547212 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:50 tcp packet: &{SrcPort:36953 DestPort:9000 Seq:3832547213 Ack:3670984791 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:50 connection established
2021/03/25 17:08:50 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 144 89 218 205 65 183 228 112 7 141 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:50 checksumer: &{sum:514835 oddByte:33 length:39}
2021/03/25 17:08:50 ret:  514868
2021/03/25 17:08:50 ret:  56123
2021/03/25 17:08:50 ret:  56123
2021/03/25 17:08:50 boom packet injected
2021/03/25 17:08:50 tcp packet: &{SrcPort:36953 DestPort:9000 Seq:3832547213 Ack:3670984791 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:52 tcp packet: &{SrcPort:38623 DestPort:9000 Seq:3247214413 Ack:3450275579 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:52 tcp packet: &{SrcPort:44927 DestPort:9000 Seq:588469833 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:52 tcp packet: &{SrcPort:44927 DestPort:9000 Seq:588469834 Ack:4025030966 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:52 connection established
2021/03/25 17:08:52 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 175 127 239 231 146 150 35 19 86 74 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:52 checksumer: &{sum:481830 oddByte:33 length:39}
2021/03/25 17:08:52 ret:  481863
2021/03/25 17:08:52 ret:  23118
2021/03/25 17:08:52 ret:  23118
2021/03/25 17:08:52 boom packet injected
2021/03/25 17:08:52 tcp packet: &{SrcPort:44927 DestPort:9000 Seq:588469834 Ack:4025030966 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:54 tcp packet: &{SrcPort:44425 DestPort:9000 Seq:26120804 Ack:1984621996 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:54 tcp packet: &{SrcPort:44505 DestPort:9000 Seq:159175117 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:54 tcp packet: &{SrcPort:44505 DestPort:9000 Seq:159175118 Ack:374595917 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:54 connection established
2021/03/25 17:08:54 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 173 217 22 82 90 173 9 124 209 206 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:54 checksumer: &{sum:533108 oddByte:33 length:39}
2021/03/25 17:08:54 ret:  533141
2021/03/25 17:08:54 ret:  8861
2021/03/25 17:08:54 ret:  8861
2021/03/25 17:08:54 boom packet injected
2021/03/25 17:08:54 tcp packet: &{SrcPort:44505 DestPort:9000 Seq:159175118 Ack:374595917 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:56 tcp packet: &{SrcPort:34357 DestPort:9000 Seq:3245635711 Ack:2773733734 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:56 tcp packet: &{SrcPort:42447 DestPort:9000 Seq:2646767884 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:56 tcp packet: &{SrcPort:42447 DestPort:9000 Seq:2646767885 Ack:4140584292 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:56 connection established
2021/03/25 17:08:56 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 165 207 246 202 198 196 157 194 121 13 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:56 checksumer: &{sum:536052 oddByte:33 length:39}
2021/03/25 17:08:56 ret:  536085
2021/03/25 17:08:56 ret:  11805
2021/03/25 17:08:56 ret:  11805
2021/03/25 17:08:56 boom packet injected
2021/03/25 17:08:56 tcp packet: &{SrcPort:42447 DestPort:9000 Seq:2646767885 Ack:4140584292 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:58 tcp packet: &{SrcPort:36439 DestPort:9000 Seq:3411836792 Ack:2533649831 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:58 tcp packet: &{SrcPort:44721 DestPort:9000 Seq:413891525 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:08:58 tcp packet: &{SrcPort:44721 DestPort:9000 Seq:413891526 Ack:2700678719 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:08:58 connection established
2021/03/25 17:08:58 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 174 177 160 247 143 159 24 171 123 198 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:08:58 checksumer: &{sum:571629 oddByte:33 length:39}
2021/03/25 17:08:58 ret:  571662
2021/03/25 17:08:58 ret:  47382
2021/03/25 17:08:58 ret:  47382
2021/03/25 17:08:58 boom packet injected
2021/03/25 17:08:58 tcp packet: &{SrcPort:44721 DestPort:9000 Seq:413891526 Ack:2700678719 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:00 tcp packet: &{SrcPort:36953 DestPort:9000 Seq:3832547214 Ack:3670984792 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:00 tcp packet: &{SrcPort:33561 DestPort:9000 Seq:2006632727 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:00 tcp packet: &{SrcPort:33561 DestPort:9000 Seq:2006632728 Ack:1978934128 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:00 connection established
2021/03/25 17:09:00 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 131 25 117 242 156 208 119 154 201 24 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:00 checksumer: &{sum:495185 oddByte:33 length:39}
2021/03/25 17:09:00 ret:  495218
2021/03/25 17:09:00 ret:  36473
2021/03/25 17:09:00 ret:  36473
2021/03/25 17:09:00 boom packet injected
2021/03/25 17:09:00 tcp packet: &{SrcPort:33561 DestPort:9000 Seq:2006632728 Ack:1978934128 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:02 tcp packet: &{SrcPort:44927 DestPort:9000 Seq:588469835 Ack:4025030967 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:02 tcp packet: &{SrcPort:39581 DestPort:9000 Seq:634354749 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:02 tcp packet: &{SrcPort:39581 DestPort:9000 Seq:634354750 Ack:260164776 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:02 connection established
2021/03/25 17:09:02 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 154 157 15 128 70 8 37 207 124 62 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:02 checksumer: &{sum:471565 oddByte:33 length:39}
2021/03/25 17:09:02 ret:  471598
2021/03/25 17:09:02 ret:  12853
2021/03/25 17:09:02 ret:  12853
2021/03/25 17:09:02 boom packet injected
2021/03/25 17:09:02 tcp packet: &{SrcPort:39581 DestPort:9000 Seq:634354750 Ack:260164776 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:04 tcp packet: &{SrcPort:44505 DestPort:9000 Seq:159175119 Ack:374595918 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:04 tcp packet: &{SrcPort:42419 DestPort:9000 Seq:1341355953 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:04 tcp packet: &{SrcPort:42419 DestPort:9000 Seq:1341355954 Ack:461329051 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:04 connection established
2021/03/25 17:09:04 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 165 179 27 125 203 251 79 243 119 178 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:04 checksumer: &{sum:577742 oddByte:33 length:39}
2021/03/25 17:09:04 ret:  577775
2021/03/25 17:09:04 ret:  53495
2021/03/25 17:09:04 ret:  53495
2021/03/25 17:09:04 boom packet injected
2021/03/25 17:09:04 tcp packet: &{SrcPort:42419 DestPort:9000 Seq:1341355954 Ack:461329051 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:06 tcp packet: &{SrcPort:42447 DestPort:9000 Seq:2646767886 Ack:4140584293 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:06 tcp packet: &{SrcPort:37761 DestPort:9000 Seq:3613906296 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:06 tcp packet: &{SrcPort:37761 DestPort:9000 Seq:3613906297 Ack:2367295960 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:06 connection established
2021/03/25 17:09:06 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 147 129 141 24 139 56 215 103 213 121 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:06 checksumer: &{sum:438996 oddByte:33 length:39}
2021/03/25 17:09:06 ret:  439029
2021/03/25 17:09:06 ret:  45819
2021/03/25 17:09:06 ret:  45819
2021/03/25 17:09:06 boom packet injected
2021/03/25 17:09:06 tcp packet: &{SrcPort:37761 DestPort:9000 Seq:3613906297 Ack:2367295960 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:08 tcp packet: &{SrcPort:44721 DestPort:9000 Seq:413891527 Ack:2700678720 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:08 tcp packet: &{SrcPort:42219 DestPort:9000 Seq:1080173176 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:08 tcp packet: &{SrcPort:42219 DestPort:9000 Seq:1080173177 Ack:1101860289 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:08 connection established
2021/03/25 17:09:08 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 164 235 65 171 135 33 64 98 34 121 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:08 checksumer: &{sum:496203 oddByte:33 length:39}
2021/03/25 17:09:08 ret:  496236
2021/03/25 17:09:08 ret:  37491
2021/03/25 17:09:08 ret:  37491
2021/03/25 17:09:08 boom packet injected
2021/03/25 17:09:08 tcp packet: &{SrcPort:42219 DestPort:9000 Seq:1080173177 Ack:1101860289 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:10 tcp packet: &{SrcPort:33561 DestPort:9000 Seq:2006632729 Ack:1978934129 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:10 tcp packet: &{SrcPort:45539 DestPort:9000 Seq:1558971852 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:10 tcp packet: &{SrcPort:45539 DestPort:9000 Seq:1558971853 Ack:2619966871 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:10 connection established
2021/03/25 17:09:10 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 177 227 156 39 254 247 92 236 5 205 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:10 checksumer: &{sum:572201 oddByte:33 length:39}
2021/03/25 17:09:10 ret:  572234
2021/03/25 17:09:10 ret:  47954
2021/03/25 17:09:10 ret:  47954
2021/03/25 17:09:10 boom packet injected
2021/03/25 17:09:10 tcp packet: &{SrcPort:45539 DestPort:9000 Seq:1558971853 Ack:2619966871 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:12 tcp packet: &{SrcPort:39581 DestPort:9000 Seq:634354751 Ack:260164777 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:12 tcp packet: &{SrcPort:42431 DestPort:9000 Seq:1276021088 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:12 tcp packet: &{SrcPort:42431 DestPort:9000 Seq:1276021089 Ack:4017880015 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:12 connection established
2021/03/25 17:09:12 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 165 191 239 122 117 47 76 14 137 97 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:12 checksumer: &{sum:448603 oddByte:33 length:39}
2021/03/25 17:09:12 ret:  448636
2021/03/25 17:09:12 ret:  55426
2021/03/25 17:09:12 ret:  55426
2021/03/25 17:09:12 boom packet injected
2021/03/25 17:09:12 tcp packet: &{SrcPort:42431 DestPort:9000 Seq:1276021089 Ack:4017880015 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:14 tcp packet: &{SrcPort:42419 DestPort:9000 Seq:1341355955 Ack:461329052 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:14 tcp packet: &{SrcPort:45957 DestPort:9000 Seq:3575156151 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:14 tcp packet: &{SrcPort:45957 DestPort:9000 Seq:3575156152 Ack:712353105 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:14 connection established
2021/03/25 17:09:14 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 179 133 42 116 30 177 213 24 141 184 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:14 checksumer: &{sum:490202 oddByte:33 length:39}
2021/03/25 17:09:14 ret:  490235
2021/03/25 17:09:14 ret:  31490
2021/03/25 17:09:14 ret:  31490
2021/03/25 17:09:14 boom packet injected
2021/03/25 17:09:14 tcp packet: &{SrcPort:45957 DestPort:9000 Seq:3575156152 Ack:712353105 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:16 tcp packet: &{SrcPort:37761 DestPort:9000 Seq:3613906298 Ack:2367295961 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:16 tcp packet: &{SrcPort:33281 DestPort:9000 Seq:3010906686 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:16 tcp packet: &{SrcPort:33281 DestPort:9000 Seq:3010906687 Ack:517046773 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:16 connection established
2021/03/25 17:09:16 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 130 1 30 207 251 85 179 118 202 63 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:16 checksumer: &{sum:449429 oddByte:33 length:39}
2021/03/25 17:09:16 ret:  449462
2021/03/25 17:09:16 ret:  56252
2021/03/25 17:09:16 ret:  56252
2021/03/25 17:09:16 boom packet injected
2021/03/25 17:09:16 tcp packet: &{SrcPort:33281 DestPort:9000 Seq:3010906687 Ack:517046773 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:18 tcp packet: &{SrcPort:42219 DestPort:9000 Seq:1080173178 Ack:1101860290 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:18 tcp packet: &{SrcPort:33133 DestPort:9000 Seq:3330988893 Ack:0 Flags:40962 WindowSize:64240 Checksum:6615 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.144
2021/03/25 17:09:18 tcp packet: &{SrcPort:33133 DestPort:9000 Seq:3330988894 Ack:1136031173 Flags:32784 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.144
2021/03/25 17:09:18 connection established
2021/03/25 17:09:18 calling checksumTCP: 10.244.1.49 10.244.2.144 [35 40 129 109 67 180 239 37 198 138 219 94 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 17:09:18 checksumer: &{sum:470993 oddByte:33 length:39}
2021/03/25 17:09:18 ret:  471026
2021/03/25 17:09:18 ret:  12281
2021/03/25 17:09:18 ret:  12281
2021/03/25 17:09:18 boom packet injected
2021/03/25 17:09:18 tcp packet: &{SrcPort:33133 DestPort:9000 Seq:3330988894 Ack:1136031173 Flags:32785 WindowSize:502 Checksum:6607 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.144

Mar 25 17:09:18.816: INFO: boom-server OK: did not receive any RST packet
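The checksumer lines in the pod log above can be read as a running 32-bit sum that is folded back into 16 bits: the first ret adds the trailing odd byte (444746 + 33 = 444779), and the following rets fold the carry until the value fits in 16 bits (444779 -> 6 + 51563 = 51569). A stand-alone Go sketch of that folding step (an illustration of standard one's-complement folding that reproduces the logged values, not the test's own source):

package main

import "fmt"

// foldChecksum reduces a 32-bit running sum to 16 bits by repeatedly adding
// the carry (high 16 bits) back into the low 16 bits, as in a TCP/IP
// one's-complement checksum.
func foldChecksum(sum uint32) uint32 {
	for sum > 0xFFFF {
		sum = (sum >> 16) + (sum & 0xFFFF)
	}
	return sum
}

func main() {
	// Values from the first connection in the log: sum 444746 plus odd byte 33.
	sum := uint32(444746) + 33        // 444779, the first "ret"
	fmt.Println(foldChecksum(sum))    // 51569, matching the folded "ret" lines
}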
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:09:18.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7476" for this suite.

• [SLOW TEST:87.363 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":51,"completed":30,"skipped":4615,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking 
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:09:18.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-9775
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:09:19.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:09:20.202: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:09:22.208: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:09:24.381: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:09:26.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:28.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:30.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:32.205: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:34.439: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:36.206: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:38.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:40.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:42.207: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:09:44.208: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:09:44.214: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:09:50.532: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:09:50.532: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:09:50.837: INFO: Service node-port-service in namespace nettest-9775 found.
Mar 25 17:09:51.052: INFO: Service session-affinity-service in namespace nettest-9775 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:09:52.478: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:09:53.532: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: checking kube-proxy URLs
STEP: Getting kube-proxy self URL /healthz
Mar 25 17:09:53.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-9775 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Mar 25 17:09:53.730: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Mar 25 17:09:53.730: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Thu, 25 Mar 2021 17:09:53 GMT\r\nContent-Length: 155\r\n\r\n{\"lastUpdated\": \"2021-03-25 17:09:53.720463908 +0000 UTC m=+291766.291397365\",\"currentTime\": \"2021-03-25 17:09:53.720463908 +0000 UTC m=+291766.291397365\"}"
STEP: Getting kube-proxy self URL /healthz
Mar 25 17:09:53.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-9775 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Mar 25 17:09:53.943: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Mar 25 17:09:53.943: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Thu, 25 Mar 2021 17:09:53 GMT\r\nContent-Length: 155\r\n\r\n{\"lastUpdated\": \"2021-03-25 17:09:53.932804164 +0000 UTC m=+291766.503737586\",\"currentTime\": \"2021-03-25 17:09:53.932804164 +0000 UTC m=+291766.503737586\"}"
STEP: Checking status code against http://localhost:10249/proxyMode
Mar 25 17:09:53.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-9775 exec host-test-container-pod -- /bin/sh -x -c curl -o /dev/null -i -q -s -w %{http_code} --connect-timeout 1 http://localhost:10249/proxyMode'
Mar 25 17:09:54.149: INFO: stderr: "+ curl -o /dev/null -i -q -s -w '%{http_code}' --connect-timeout 1 http://localhost:10249/proxyMode\n"
Mar 25 17:09:54.149: INFO: stdout: "200"
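The two URLs exercised above are kube-proxy's healthz endpoint on port 10256, which returns the lastUpdated/currentTime JSON seen in stdout, and the proxyMode endpoint on the metrics port 10249, which is expected to answer with HTTP 200. A minimal Go sketch of the same two checks, assuming it runs where kube-proxy's localhost listeners are reachable (for example from a hostNetwork pod such as host-test-container-pod); this is a hand-written illustration, not the e2e framework's code:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Healthz check: expect 200 and a small JSON body with two timestamps.
	resp, err := http.Get("http://localhost:10256/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var hz struct {
		LastUpdated string `json:"lastUpdated"`
		CurrentTime string `json:"currentTime"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&hz); err != nil {
		panic(err)
	}
	fmt.Println("healthz:", resp.StatusCode, hz.LastUpdated)

	// proxyMode check: only the status code matters for the test step above.
	pm, err := http.Get("http://localhost:10249/proxyMode")
	if err != nil {
		panic(err)
	}
	defer pm.Body.Close()
	fmt.Println("proxyMode status:", pm.StatusCode)
}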
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:09:54.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9775" for this suite.

• [SLOW TEST:35.397 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":51,"completed":31,"skipped":4651,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:09:54.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-4512
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:09:54.455: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:09:54.552: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:09:56.724: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:09:58.576: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:10:00.555: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:10:02.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:04.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:06.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:08.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:10.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:12.556: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:14.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:16.557: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:10:16.562: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:10:22.644: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:10:22.644: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:10:22.768: INFO: Service node-port-service in namespace nettest-4512 found.
Mar 25 17:10:22.924: INFO: Service session-affinity-service in namespace nettest-4512 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:10:23.943: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:10:24.948: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.112.252:90
Mar 25 17:10:24.959: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:24.959: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:25.073: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:27.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:27.079: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:27.228: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:29.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:29.232: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:29.363: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:31.368: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:31.368: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:31.490: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:33.494: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:33.495: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:33.588: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:35.592: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:35.592: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:35.706: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:37.711: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:37.711: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:37.849: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:39.855: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:39.855: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:39.960: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:41.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:41.964: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:42.105: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
Mar 25 17:10:44.109: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.59:9080/dial?request=hostName&protocol=udp&host=10.96.112.252&port=90&tries=1'] Namespace:nettest-4512 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:10:44.109: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:10:44.224: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4512, hostIp: 172.18.0.15, podIp: 10.244.1.59, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:10:16 +0000 UTC  }]" }
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:10:46.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4512" for this suite.

• [SLOW TEST:52.014 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":51,"completed":32,"skipped":4692,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:10:46.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-1104
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:10:46.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:10:46.445: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:10:48.604: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:10:50.484: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:10:52.634: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:54.450: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:56.451: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:10:58.449: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:00.450: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:02.450: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:04.450: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:06.449: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:11:06.454: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:11:10.481: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:11:10.481: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:11:10.611: INFO: Service node-port-service in namespace nettest-1104 found.
Mar 25 17:11:10.715: INFO: Service session-affinity-service in namespace nettest-1104 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:11:11.724: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:11:12.729: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.83.93:90 (config.clusterIP)
Mar 25 17:11:12.736: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=10.96.83.93&port=90&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:12.736: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:12.855: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:11:14.879: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=10.96.83.93&port=90&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:14.879: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:15.000: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:11:17.005: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=10.96.83.93&port=90&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:17.005: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:17.128: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:11:19.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=10.96.83.93&port=90&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:19.131: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:19.249: INFO: Waiting for responses: map[]
Mar 25 17:11:19.249: INFO: reached 10.96.83.93 after 3/34 tries
STEP: dialing(udp) test-container-pod --> 172.18.0.17:30713 (nodeIP)
Mar 25 17:11:19.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=30713&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:19.251: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:19.340: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 17:11:21.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.61:9080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=30713&tries=1'] Namespace:nettest-1104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:21.343: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:21.462: INFO: Waiting for responses: map[]
Mar 25 17:11:21.462: INFO: reached 172.18.0.17 after 1/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:11:21.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1104" for this suite.

• [SLOW TEST:35.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":51,"completed":33,"skipped":4894,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:11:21.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-1768
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:11:21.676: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:11:21.750: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:11:23.765: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:11:25.768: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:28.077: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:29.878: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:32.046: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:33.856: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:35.780: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:37.795: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:11:39.819: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:11:39.825: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 17:11:41.829: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 17:11:43.849: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:11:49.895: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:11:49.895: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:11:49.959: INFO: Service node-port-service in namespace nettest-1768 found.
Mar 25 17:11:50.033: INFO: Service session-affinity-service in namespace nettest-1768 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:11:51.065: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:11:52.070: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:31090 (nodeIP) and getting ALL host endpoints
Mar 25 17:11:52.077: INFO: Going to poll 172.18.0.17 on port 31090 at least 0 times, with a maximum of 34 tries before failing
Mar 25 17:11:52.079: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31090 | grep -v '^\s*$'] Namespace:nettest-1768 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:52.079: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:53.211: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Mar 25 17:11:55.213: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31090 | grep -v '^\s*$'] Namespace:nettest-1768 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:11:55.213: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:11:56.349: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:11:56.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1768" for this suite.

• [SLOW TEST:34.884 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality","total":51,"completed":34,"skipped":5016,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:11:56.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-4319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:11:56.561: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:11:56.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:11:58.651: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:12:00.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:12:02.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:12:04.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:12:06.645: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:12:08.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:12:10.643: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:12:10.649: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:12:14.767: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:12:14.767: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:12:14.913: INFO: Service node-port-service in namespace nettest-4319 found.
Mar 25 17:12:15.144: INFO: Service session-affinity-service in namespace nettest-4319 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:12:16.171: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:12:17.177: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:31974 (nodeIP) and getting ALL host endpoints
Mar 25 17:12:17.203: INFO: Going to poll 172.18.0.17 on port 31974 at least 0 times, with a maximum of 34 tries before failing
Mar 25 17:12:17.205: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:17.205: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:17.349: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Mar 25 17:12:19.364: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:19.365: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:19.500: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
STEP: Deleting the node port access point
STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:31974 (nodeIP) and getting ZERO host endpoints
Mar 25 17:12:34.587: INFO: Going to poll 172.18.0.17 on port 31974 at least 34 times, with a maximum of 34 tries before failing
Mar 25 17:12:35.575: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:35.575: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:37.056: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:37.056: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:39.444: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:39.444: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:39.763: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:39.763: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:41.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:41.892: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:41.997: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:41.997: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:44.132: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:44.132: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:44.265: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:44.265: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:46.293: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:46.293: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:46.422: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:46.422: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:48.427: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:48.427: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:48.544: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:48.544: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:50.549: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:50.549: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:50.683: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:50.683: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:52.689: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:52.689: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:52.818: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:52.818: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:54.821: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:54.821: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:54.945: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:54.945: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:56.950: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:56.950: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:57.090: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:57.090: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:12:59.095: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:12:59.095: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:12:59.201: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:12:59.201: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:01.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:01.209: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:01.352: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:01.352: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:03.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:03.357: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:03.471: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:03.471: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:05.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:05.475: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:05.589: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:05.589: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:07.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:07.594: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:07.705: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:07.705: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:09.709: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:09.709: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:09.835: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:09.835: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:11.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:11.850: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:11.967: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:11.968: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:13.972: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:13.972: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:14.084: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:14.084: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:16.088: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:16.089: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:16.221: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:16.221: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:18.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:18.226: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:18.329: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:18.329: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:20.335: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:20.335: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:20.437: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:20.437: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:22.442: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:22.442: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:22.567: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:22.567: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:24.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:24.571: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:24.690: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:24.690: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:26.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:26.694: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:26.793: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:26.793: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:28.802: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:28.802: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:28.925: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:28.925: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:30.929: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:30.929: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:31.068: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:31.068: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:33.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:33.073: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:33.181: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:33.181: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:35.185: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:35.185: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:35.319: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:35.319: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:37.347: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:37.347: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:37.435: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:37.435: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:39.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:39.440: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:39.564: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:39.564: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:41.568: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:41.568: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:41.681: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:41.681: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:43.684: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:43.685: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:43.798: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:43.798: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:45.803: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:45.803: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:45.919: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:45.919: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 25 17:13:47.923: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\s*$'] Namespace:nettest-4319 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:13:47.923: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:13:48.025: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:31974/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 25 17:13:48.025: INFO: Found all 0 expected endpoints: []
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:13:48.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4319" for this suite.

• [SLOW TEST:111.674 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]","total":51,"completed":35,"skipped":5222,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:13:48.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-6640
STEP: creating service up-down-1 in namespace services-6640
STEP: creating replication controller up-down-1 in namespace services-6640
I0325 17:13:48.155727       7 runners.go:190] Created replication controller with name: up-down-1, namespace: services-6640, replica count: 3
I0325 17:13:51.206676       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:13:54.207059       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-6640
STEP: creating service up-down-2 in namespace services-6640
STEP: creating replication controller up-down-2 in namespace services-6640
I0325 17:13:54.834188       7 runners.go:190] Created replication controller with name: up-down-2, namespace: services-6640, replica count: 3
I0325 17:13:57.886135       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:14:00.887388       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:14:03.888487       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Mar 25 17:14:03.891: INFO: Creating new host exec pod
Mar 25 17:14:03.928: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:06.145: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:07.931: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:14:07.931: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:14:13.967: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.232.42:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-host-exec-pod
Mar 25 17:14:13.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.232.42:80 2>&1 || true; echo; done'
Mar 25 17:14:23.648: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n"
Mar 25 17:14:23.649: INFO: stdout: "up-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\n"
Mar 25 17:14:23.649: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.232.42:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-exec-pod-z4t2w
Mar 25 17:14:23.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-exec-pod-z4t2w -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.232.42:80 2>&1 || true; echo; done'
Mar 25 17:14:24.116: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.232.42:80\n+ echo\n"
Mar 25 17:14:24.116: INFO: stdout: "up-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-nvk5j\nup-down-1-nvk5j\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-ghfng\nup-down-1-kvk4f\nup-down-1-kvk4f\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6640
STEP: Deleting pod verify-service-up-exec-pod-z4t2w in namespace services-6640
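The probe that produced the stderr/stdout blocks above is just a shell loop run via kubectl exec: it hits the service's ClusterIP 150 times and records which backend pod answered each request (each reply is the serving pod's hostname). A minimal stand-alone sketch of that loop, assuming a shell and wget are available in the probe pod and using the ClusterIP 10.96.232.42:80 exactly as logged:

# probe the ClusterIP 150 times; each successful reply is the hostname of the serving backend
for i in $(seq 1 150); do
  wget -q -T 1 -O - http://10.96.232.42:80 2>&1 || true   # -T 1: 1-second timeout; individual failures are tolerated
  echo                                                    # force a newline so every reply sits on its own line
done

The same loop is run twice, once from the host-network pod and once from the regular exec pod, and both probe pods are deleted afterwards, as the two STEP lines above show.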
STEP: verifying service up-down-2 is up
Mar 25 17:14:24.206: INFO: Creating new host exec pod
Mar 25 17:14:24.224: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:26.342: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:28.230: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:14:28.231: INFO: Creating new exec pod
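As before, the check is made from two vantage points: a pod on the host network (verify-service-up-host-exec-pod) and an ordinary pod (the "new exec pod"), so both the node-level and the pod-level service paths are exercised. A rough hand-rolled equivalent of the host-network probe pod, assuming the agnhost e2e image (image name and tag are assumptions, not taken from this log):

# launch a long-running host-network pod to probe the service from the node's network namespace
kubectl run verify-service-up-host-exec-pod -n services-6640 \
  --image=k8s.gcr.io/e2e-test-images/agnhost:2.28 \
  --overrides='{"spec":{"hostNetwork":true}}' \
  --restart=Never -- pause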
STEP: verifying service has 3 reachable backends
Mar 25 17:14:32.246: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-host-exec-pod
Mar 25 17:14:32.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:14:32.672: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n"
Mar 25 17:14:32.672: INFO: stdout: "up-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\n"
Mar 25 17:14:32.672: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-exec-pod-t4cw4
Mar 25 17:14:32.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-exec-pod-t4cw4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:14:33.076: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n"
Mar 25 17:14:33.076: INFO: stdout: "up-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6640
STEP: Deleting pod verify-service-up-exec-pod-t4cw4 in namespace services-6640
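The "3 reachable backends" assertion holds when the 150 replies collected above contain every expected endpoint at least once; the spread across up-down-2-5r6hm, up-down-2-6rqhc and up-down-2-gtrmh in the stdout is the load-balancing evidence. The same check can be reproduced from a saved copy of that stdout; replies.txt is a hypothetical file name:

# per-backend hit counts, then the number of distinct backends (expect 3 for up-down-2)
sort replies.txt | uniq -c
sort -u replies.txt | wc -l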
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-6640, will wait for the garbage collector to delete the pods
Mar 25 17:14:33.676: INFO: Deleting ReplicationController up-down-1 took: 8.468293ms
Mar 25 17:14:34.277: INFO: Terminating ReplicationController up-down-1 pods took: 601.069496ms
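Stopping up-down-1 deletes the ReplicationController and leaves pod removal to the garbage collector, which is why the controller deletion and the pod termination are reported as two separate durations. A comparable manual sequence, assuming the pods carry a name=up-down-1 label (an assumption about how the e2e pods are labelled, not shown in this log):

# delete the controller; its pods are then removed by the garbage collector
kubectl -n services-6640 delete rc up-down-1
# block until the garbage collector has actually removed the pods
kubectl -n services-6640 wait --for=delete pod -l name=up-down-1 --timeout=120s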
STEP: verifying service up-down-1 is not up
Mar 25 17:14:45.843: INFO: Creating new host exec pod
Mar 25 17:14:45.902: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:47.907: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:49.907: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:14:49.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.232.42:80 && echo service-down-failed'
Mar 25 17:14:52.121: INFO: rc: 28
Mar 25 17:14:52.121: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.232.42:80 && echo service-down-failed" in pod services-6640/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.232.42:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.232.42:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6640
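The "is not up" probe inverts the success condition: curl must fail to reach the torn-down service's ClusterIP, and the trailing "&& echo service-down-failed" only fires if the request unexpectedly succeeds. Exit code 28 is curl's connect-timeout code, i.e. nothing answered at 10.96.232.42 within the 2-second limit, which is exactly what the test wants to see. A minimal sketch of the same check:

# succeed only when the ClusterIP of the deleted service no longer answers
if curl -g -s --connect-timeout 2 http://10.96.232.42:80; then
  echo service-down-failed       # the service still responded: this is the failure path
  exit 1
fi
echo service-is-down-as-expected # curl exit 28 (timeout) or 7 (connection refused) both end up here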
STEP: verifying service up-down-2 is still up
Mar 25 17:14:52.176: INFO: Creating new host exec pod
Mar 25 17:14:52.274: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:54.279: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:14:56.280: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:14:56.280: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:15:00.306: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-host-exec-pod
Mar 25 17:15:00.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:15:00.724: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n"
Mar 25 17:15:00.724: INFO: stdout: "up-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\n"
Mar 25 17:15:00.724: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-exec-pod-hqs7m
Mar 25 17:15:00.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-exec-pod-hqs7m -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:15:01.158: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n"
Mar 25 17:15:01.158: INFO: stdout: "up-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-gtrmh\nup-down-2-gtrmh\nup-down-2-6rqhc\nup-down-2-5r6hm\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6640
STEP: Deleting pod verify-service-up-exec-pod-hqs7m in namespace services-6640
STEP: creating service up-down-3 in namespace services-6640
STEP: creating replication controller up-down-3 in namespace services-6640
I0325 17:15:01.633657       7 runners.go:190] Created replication controller with name: up-down-3, namespace: services-6640, replica count: 3
I0325 17:15:04.685577       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:15:07.685755       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
Mar 25 17:15:07.689: INFO: Creating new host exec pod
Mar 25 17:15:07.741: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:15:09.745: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:15:11.746: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:15:11.746: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:15:15.762: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-host-exec-pod
Mar 25 17:15:15.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:15:16.154: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n ... [the wget/echo pair repeats for all 150 iterations; trimmed] ...\n"
Mar 25 17:15:16.154: INFO: stdout: "up-down-2-6rqhc\nup-down-2-6rqhc\nup-down-2-5r6hm\n ... [remaining responses trimmed; all three up-down-2 backends answer repeatedly] ...\n"
Mar 25 17:15:16.155: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-exec-pod-s92cq
Mar 25 17:15:16.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-exec-pod-s92cq -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.121.21:80 2>&1 || true; echo; done'
Mar 25 17:15:16.817: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.121.21:80\n+ echo\n ... [the wget/echo pair repeats for all 150 iterations; trimmed] ...\n"
Mar 25 17:15:16.818: INFO: stdout: "up-down-2-5r6hm\nup-down-2-gtrmh\nup-down-2-6rqhc\n ... [remaining responses trimmed; all three up-down-2 backends answer repeatedly] ...\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6640
STEP: Deleting pod verify-service-up-exec-pod-s92cq in namespace services-6640
STEP: verifying service up-down-3 is up
Mar 25 17:15:17.770: INFO: Creating new host exec pod
Mar 25 17:15:18.106: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:15:20.325: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:15:22.111: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:15:22.111: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:15:30.433: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.195.48:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-host-exec-pod
Mar 25 17:15:30.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.195.48:80 2>&1 || true; echo; done'
Mar 25 17:15:31.183: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.195.48:80\n+ echo\n ... [the wget/echo pair repeats for all 150 iterations; trimmed] ...\n"
Mar 25 17:15:31.184: INFO: stdout: "up-down-3-rb6m5\nup-down-3-rb6m5\nup-down-3-rb6m5\n ... [remaining responses trimmed; all three up-down-3 backends (up-down-3-rb6m5, up-down-3-f46j4, up-down-3-nfwts) answer repeatedly] ...\n"
Mar 25 17:15:31.184: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.195.48:80 2>&1 || true; echo; done" in pod services-6640/verify-service-up-exec-pod-4crkd
Mar 25 17:15:31.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-6640 exec verify-service-up-exec-pod-4crkd -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.195.48:80 2>&1 || true; echo; done'
Mar 25 17:15:31.946: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.195.48:80\n+ echo\n ... [the wget/echo pair repeats for all 150 iterations; trimmed] ...\n"
Mar 25 17:15:31.946: INFO: stdout: "up-down-3-nfwts\nup-down-3-rb6m5\nup-down-3-rb6m5\n ... [remaining responses trimmed; all three up-down-3 backends answer repeatedly] ...\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6640
STEP: Deleting pod verify-service-up-exec-pod-4crkd in namespace services-6640
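[Editor's note] The reachability checks above are driven entirely from inside the cluster: for each service the framework execs the same /bin/sh wget loop in a host-network pod and in a regular exec pod, then inspects which backends answer. The sketch below is a minimal manual re-run of that probe, assuming only the kubectl invocation already shown in the log (namespace services-6640, pod verify-service-up-host-exec-pod, ClusterIP 10.96.195.48 for up-down-3); the trailing sort/uniq tally is an addition for readability, not part of the framework.

  # Sketch: repeat the logged probe and count answers per backend pod.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-6640 \
    exec verify-service-up-host-exec-pod -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.195.48:80 2>&1 || true; echo; done' \
    | sort | uniq -c

The service counts as up when every expected backend (three here) appears in that tally, which is exactly what the stdout blocks above demonstrate for up-down-2 and up-down-3.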
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:15:34.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6640" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:106.731 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":51,"completed":36,"skipped":5340,"failed":2,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:15:34.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-6337
Mar 25 17:15:36.267: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-6337
I0325 17:15:36.637469       7 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-6337, replica count: 2
I0325 17:15:39.689027       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:15:42.690124       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar 25 17:15:42.690: INFO: Creating new exec pod
E0325 17:15:47.915282       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:15:48.927180       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:15:51.881318       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:15:56.539772       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:16:03.814075       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:16:21.623260       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:16:54.118739       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 17:17:27.011357       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 17:17:47.913: FAIL: Unexpected error:
    <*errors.errorString | 0xc004d948f0>: {
        s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002a94900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002a94900, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
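[Editor's note] The failure is the framework's endpoint wait expiring: the watcher created at jig.go:437 repeatedly fails to list EndpointSlice objects ("the server could not find the requested resource"), so after 2m0s no ready addresses are observed for nodeport-update-service even though both backend pods reach Running. A plausible first check, using nothing beyond the kubeconfig the suite already uses (the discovery.k8s.io group name and the kubernetes.io/service-name label are standard; namespace and service name are taken from the log), is to ask the apiserver which EndpointSlice versions it serves and whether a slice exists for the service:

  # Which discovery.k8s.io versions does this v1.21.0-alpha.0 apiserver expose?
  kubectl --kubeconfig=/root/.kube/config api-versions | grep discovery.k8s.io
  # Is there an EndpointSlice for the service at all?
  kubectl --kubeconfig=/root/.kube/config -n services-6337 get endpointslices \
    -l kubernetes.io/service-name=nodeport-update-service

A client built from v1.21.0-beta.1 requesting an EndpointSlice API version that the v1.21.0-alpha.0 apiserver does not serve would be consistent with both the reflector errors and the eventual timeout, though the log alone does not confirm that.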
Mar 25 17:17:47.914: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6337".
STEP: Found 14 events.
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:36 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-qmglf
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:36 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-bsbjv
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:36 +0000 UTC - event for nodeport-update-service-bsbjv: {default-scheduler } Scheduled: Successfully assigned services-6337/nodeport-update-service-bsbjv to latest-worker2
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:36 +0000 UTC - event for nodeport-update-service-qmglf: {default-scheduler } Scheduled: Successfully assigned services-6337/nodeport-update-service-qmglf to latest-worker
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:38 +0000 UTC - event for nodeport-update-service-qmglf: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:39 +0000 UTC - event for nodeport-update-service-bsbjv: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:40 +0000 UTC - event for nodeport-update-service-bsbjv: {kubelet latest-worker2} Created: Created container nodeport-update-service
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:40 +0000 UTC - event for nodeport-update-service-qmglf: {kubelet latest-worker} Created: Created container nodeport-update-service
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:40 +0000 UTC - event for nodeport-update-service-qmglf: {kubelet latest-worker} Started: Started container nodeport-update-service
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:41 +0000 UTC - event for nodeport-update-service-bsbjv: {kubelet latest-worker2} Started: Started container nodeport-update-service
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:43 +0000 UTC - event for execpod7jz4q: {default-scheduler } Scheduled: Successfully assigned services-6337/execpod7jz4q to latest-worker
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:45 +0000 UTC - event for execpod7jz4q: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:46 +0000 UTC - event for execpod7jz4q: {kubelet latest-worker} Started: Started container agnhost-container
Mar 25 17:17:47.975: INFO: At 2021-03-25 17:15:46 +0000 UTC - event for execpod7jz4q: {kubelet latest-worker} Created: Created container agnhost-container
Mar 25 17:17:47.978: INFO: POD                            NODE            PHASE    GRACE  CONDITIONS
Mar 25 17:17:47.978: INFO: execpod7jz4q                   latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:42 +0000 UTC  }]
Mar 25 17:17:47.978: INFO: nodeport-update-service-bsbjv  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:36 +0000 UTC  }]
Mar 25 17:17:47.978: INFO: nodeport-update-service-qmglf  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 17:15:36 +0000 UTC  }]
Mar 25 17:17:47.978: INFO: 
Mar 25 17:17:47.981: INFO: 
Logging node info for node latest-control-plane
Mar 25 17:17:47.983: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    cc9ffc7a-24ee-4720-b82b-ca49361a1767 1265240 0 2021-03-22 08:06:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 17:14:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 17:14:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 17:14:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 17:14:41 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:17:47.983: INFO: 
Logging kubelet events for node latest-control-plane
Mar 25 17:17:48.032: INFO: 
Logging pods the kubelet thinks are on node latest-control-plane
Mar 25 17:17:48.077: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 25 17:17:48.077: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 25 17:17:48.077: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container local-path-provisioner ready: true, restart count 0
Mar 25 17:17:48.077: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container coredns ready: true, restart count 0
Mar 25 17:17:48.077: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container coredns ready: true, restart count 0
Mar 25 17:17:48.077: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container etcd ready: true, restart count 0
Mar 25 17:17:48.077: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 25 17:17:48.077: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:17:48.077: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.077: INFO: 	Container kube-proxy ready: true, restart count 0
W0325 17:17:48.087679       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:17:48.180: INFO: 
Latency metrics for node latest-control-plane
Mar 25 17:17:48.180: INFO: 
Logging node info for node latest-worker
Mar 25 17:17:48.183: INFO: Node Info: &Node{ObjectMeta:{latest-worker    d799492c-1b1f-4258-b431-31204511a98f 1266964 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:45:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 17:07:05 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:17:48.183: INFO: 
Logging kubelet events for node latest-worker
Mar 25 17:17:48.189: INFO: 
Logging pods the kubelet thinks are on node latest-worker
Mar 25 17:17:48.212: INFO: pod-configmaps-f7a61fd5-7676-4e8e-8e36-fac18d24c41d started at 2021-03-25 17:15:58 +0000 UTC (0+3 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container createcm-volume-test ready: true, restart count 0
Mar 25 17:17:48.212: INFO: 	Container delcm-volume-test ready: true, restart count 0
Mar 25 17:17:48.212: INFO: 	Container updcm-volume-test ready: true, restart count 0
Mar 25 17:17:48.212: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:17:48.212: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 17:17:48.212: INFO: var-expansion-c683c01d-6c5f-42f6-9e2f-10bc28117056 started at 2021-03-25 17:17:43 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container dapi-container ready: false, restart count 0
Mar 25 17:17:48.212: INFO: nodeport-update-service-qmglf started at 2021-03-25 17:15:36 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container nodeport-update-service ready: true, restart count 0
Mar 25 17:17:48.212: INFO: execpod7jz4q started at 2021-03-25 17:15:43 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.212: INFO: 	Container agnhost-container ready: true, restart count 0
W0325 17:17:48.267755       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:17:48.439: INFO: 
Latency metrics for node latest-worker
Mar 25 17:17:48.439: INFO: 
Logging node info for node latest-worker2
Mar 25 17:17:48.442: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    525d2fa2-95f1-4436-b726-c3866136dd3a 1266965 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 15:38:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 15:56:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 17:17:01 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 17:17:48.443: INFO: 
Logging kubelet events for node latest-worker2
Mar 25 17:17:48.445: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 25 17:17:48.451: INFO: startup-cbd8f096-e569-472a-b9d5-dc374631ec4a started at 2021-03-25 17:17:11 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container busybox ready: false, restart count 0
Mar 25 17:17:48.451: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container volume-tester ready: false, restart count 0
Mar 25 17:17:48.451: INFO: ss-0 started at 2021-03-25 17:16:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container webserver ready: false, restart count 0
Mar 25 17:17:48.451: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 17:17:48.451: INFO: privileged-pod started at 2021-03-25 17:17:02 +0000 UTC (0+2 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container not-privileged-container ready: false, restart count 0
Mar 25 17:17:48.451: INFO: 	Container privileged-container ready: false, restart count 0
Mar 25 17:17:48.451: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 17:17:48.451: INFO: nodeport-update-service-bsbjv started at 2021-03-25 17:15:36 +0000 UTC (0+1 container statuses recorded)
Mar 25 17:17:48.451: INFO: 	Container nodeport-update-service ready: true, restart count 0
W0325 17:17:48.456447       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 17:17:48.582: INFO: 
Latency metrics for node latest-worker2
Mar 25 17:17:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6337" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [133.867 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Mar 25 17:17:47.913: Unexpected error:
      <*errors.errorString | 0xc004d948f0>: {
          s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":51,"completed":36,"skipped":5368,"failed":3,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:17:48.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-1075
STEP: creating service service-proxy-disabled in namespace services-1075
STEP: creating replication controller service-proxy-disabled in namespace services-1075
I0325 17:17:49.003555       7 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-1075, replica count: 3
I0325 17:17:52.056005       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:17:55.057160       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:17:58.057725       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-1075
STEP: creating service service-proxy-toggled in namespace services-1075
STEP: creating replication controller service-proxy-toggled in namespace services-1075
I0325 17:17:58.096183       7 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-1075, replica count: 3
I0325 17:18:01.147790       7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 17:18:04.148262       7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Mar 25 17:18:04.152: INFO: Creating new host exec pod
Mar 25 17:18:04.168: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:06.278: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:08.172: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:18:08.172: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:18:12.190: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done" in pod services-1075/verify-service-up-host-exec-pod
Mar 25 17:18:12.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done'
Mar 25 17:18:12.617: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n"
Mar 25 17:18:12.618: INFO: stdout: "service-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-pr
oxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\n"
Mar 25 17:18:12.618: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done" in pod services-1075/verify-service-up-exec-pod-zrfbk
Mar 25 17:18:12.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-up-exec-pod-zrfbk -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done'
Mar 25 17:18:13.058: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n"
Mar 25 17:18:13.058: INFO: stdout: "service-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-pr
oxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1075
STEP: Deleting pod verify-service-up-exec-pod-zrfbk in namespace services-1075
STEP: verifying service-disabled is not up
Mar 25 17:18:13.174: INFO: Creating new host exec pod
Mar 25 17:18:13.561: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:15.649: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:17.637: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:18:17.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed'
Mar 25 17:18:19.983: INFO: rc: 28
Mar 25 17:18:19.983: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed" in pod services-1075/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.216.226:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1075
STEP: adding service-proxy-name label
STEP: verifying service is not up
Mar 25 17:18:20.456: INFO: Creating new host exec pod
Mar 25 17:18:20.612: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:22.647: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:24.617: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:18:24.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.118.124:80 && echo service-down-failed'
Mar 25 17:18:26.874: INFO: rc: 28
Mar 25 17:18:26.874: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.118.124:80 && echo service-down-failed" in pod services-1075/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.118.124:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.118.124:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1075
STEP: removing service-proxy-name label
STEP: verifying service is up
Mar 25 17:18:27.624: INFO: Creating new host exec pod
Mar 25 17:18:27.830: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:29.954: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:31.834: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 17:18:31.834: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 17:18:35.894: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done" in pod services-1075/verify-service-up-host-exec-pod
Mar 25 17:18:35.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done'
Mar 25 17:18:36.361: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n"
Mar 25 17:18:36.361: INFO: stdout: "service-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-pr
oxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\n"
Mar 25 17:18:36.361: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done" in pod services-1075/verify-service-up-exec-pod-kqg75
Mar 25 17:18:36.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-up-exec-pod-kqg75 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.118.124:80 2>&1 || true; echo; done'
Mar 25 17:18:36.750: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.118.124:80\n+ echo\n"
Mar 25 17:18:36.751: INFO: stdout: "service-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-pr
oxy-toggled-wkvvg\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-7xhjv\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-wkvvg\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\nservice-proxy-toggled-jrpq5\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1075
STEP: Deleting pod verify-service-up-exec-pod-kqg75 in namespace services-1075
STEP: verifying service-disabled is still not up
Mar 25 17:18:37.570: INFO: Creating new host exec pod
Mar 25 17:18:37.709: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:39.878: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:41.712: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:43.715: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 17:18:43.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed'
Mar 25 17:18:45.918: INFO: rc: 28
Mar 25 17:18:45.918: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed" in pod services-1075/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-1075 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.216.226:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.216.226:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1075
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:18:45.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1075" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:57.341 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":51,"completed":37,"skipped":5437,"failed":3,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
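
For readers who want to reproduce the verification pattern shown above outside the e2e framework, the idea is simple: "service is up" is probed by running a short wget loop from a helper pod and checking that every backend pod name (here service-proxy-toggled-jrpq5, -7xhjv and -wkvvg, i.e. 3 reachable backends) appears in the responses, while "service is not up" is probed with a curl that is expected to time out with exit code 28. A minimal shell sketch, assuming kubectl access to the cluster; NS, POD, SERVICE_IP and DISABLED_IP are hypothetical placeholders, not values from this run:

  NS=services-example            # hypothetical namespace
  POD=verify-exec-pod            # hypothetical exec pod, already Running
  SERVICE_IP=10.96.0.10          # hypothetical ClusterIP of the service without the proxy-name label
  DISABLED_IP=10.96.0.11         # hypothetical ClusterIP of the service carrying the proxy-name label

  # "Service is up": hit the ClusterIP 150 times and count the distinct backend hostnames returned.
  kubectl -n "$NS" exec "$POD" -- /bin/sh -c \
    "for i in \$(seq 1 150); do wget -q -T 1 -O - http://$SERVICE_IP:80 2>&1 || true; echo; done" \
    | sort | uniq -c

  # "Service is not up": the connection attempt should time out (curl exit code 28), as in the log above.
  kubectl -n "$NS" exec "$POD" -- /bin/sh -c \
    "curl -g -s --connect-timeout 2 http://$DISABLED_IP:80 && echo service-down-failed"
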
------------------------------
[sig-network] Networking Granular Checks: Services 
  should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:18:45.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-8126
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 17:18:46.254: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 17:18:46.442: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:48.458: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:50.476: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 17:18:52.451: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:18:54.548: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:18:56.447: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:18:58.447: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:19:00.446: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:19:02.462: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:19:04.463: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 17:19:06.499: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 17:19:06.523: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 17:19:08.566: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 17:19:10.527: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 17:19:16.601: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 17:19:16.601: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 17:19:16.715: INFO: Service node-port-service in namespace nettest-8126 found.
Mar 25 17:19:17.395: INFO: Service session-affinity-service in namespace nettest-8126 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 17:19:18.400: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 17:19:19.510: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.244.116:90 (config.clusterIP)
Mar 25 17:19:19.517: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.109:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.244.116&port=90&tries=1'] Namespace:nettest-8126 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 17:19:19.517: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 17:19:19.645: INFO: Waiting for responses: map[]
Mar 25 17:19:19.645: INFO: reached 10.96.244.116 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:19:19.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8126" for this suite.

• [SLOW TEST:33.678 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":51,"completed":38,"skipped":5495,"failed":3,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
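
The /dial request in the ExecWithOptions line above is how the agnhost webserver inside the test pod is asked to relay a large payload over UDP to the service and report which endpoints answered. A minimal shell sketch of the same call; only the 9080 webserver port, the 90 service port and the query shape are taken from this run, while NS, TEST_POD_IP and CLUSTER_IP are hypothetical placeholders:

  NS=nettest-example                        # hypothetical namespace
  TEST_POD=test-container-pod               # pod name used by the framework in this run
  TEST_POD_IP=10.244.0.99                   # hypothetical pod IP serving the /dial endpoint
  CLUSTER_IP=10.96.0.20                     # hypothetical ClusterIP of the service under test
  PAYLOAD=$(printf 'no%.0s' $(seq 1 500))   # large echo payload, analogous to the long string above

  # Ask the webserver in the pod to echo the payload over UDP via the service and report the responders.
  kubectl -n "$NS" exec "$TEST_POD" -- /bin/sh -c \
    "curl -g -q -s 'http://$TEST_POD_IP:9080/dial?request=echo%20$PAYLOAD&protocol=udp&host=$CLUSTER_IP&port=90&tries=1'"
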
------------------------------
[sig-network] Services 
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:19:19.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node latest-worker
Mar 25 17:19:19.798: INFO: Creating new exec pod
Mar 25 17:19:25.087: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node latest-worker
Mar 25 17:19:25.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8409 exec execpod-noendpoints4bwhh -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Mar 25 17:19:26.409: INFO: rc: 1
Mar 25 17:19:26.409: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-8409 exec execpod-noendpoints4bwhh -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:19:26.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8409" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:6.909 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":51,"completed":39,"skipped":5554,"failed":3,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
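
The REFUSED check above is self-contained enough to repeat by hand: a connection to a Service with no ready endpoints should be rejected outright (exit code 1, REFUSED on stderr) rather than hang until a timeout. A minimal shell sketch, assuming a hypothetical namespace NS and an agnhost-based exec pod EXEC_POD; the /agnhost connect invocation and the expected output are taken verbatim from the log above:

  NS=services-example              # hypothetical namespace containing the no-pods service
  EXEC_POD=execpod-noendpoints     # hypothetical exec pod running the agnhost image

  # The service "no-pods" selects nothing, so the connection should be refused immediately.
  kubectl -n "$NS" exec "$EXEC_POD" -- /agnhost connect --timeout=3s no-pods:80
  echo "exit code: $?"             # expected: 1, with REFUSED printed by agnhost
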
------------------------------
[sig-network] Firewall rule 
  control plane should not expose well-known ports
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 17:19:26.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Mar 25 17:19:27.175: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 17:19:27.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-2567" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.781 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
Mar 25 17:19:27.349: INFO: Running AfterSuite actions on all nodes
Mar 25 17:19:27.349: INFO: Running AfterSuite actions on node 1
Mar 25 17:19:27.349: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_network/junit_01.xml
{"msg":"Test Suite completed","total":51,"completed":39,"skipped":5695,"failed":3,"failures":["[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}


Summarizing 3 Failures:

[Fail] [sig-network] Services [It] should allow pods to hairpin back to themselves through services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012

[Fail] [sig-network] Services [It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 42 of 5737 Specs in 2037.145 seconds
FAIL! -- 39 Passed | 3 Failed | 0 Pending | 5695 Skipped
--- FAIL: TestE2E (2037.28s)
FAIL
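
To chase the three failures summarized above in isolation, the usual approach with a Ginkgo-built e2e.test binary is to re-run it focused on the failing spec names. A minimal sketch, assuming the binary used for this run is still available and that the standard --kubeconfig and --ginkgo.focus flags apply in this environment (flag spellings can vary between releases, so treat this as illustrative only):

  # Re-run only the hairpin spec; the focus argument is a regular expression over spec names.
  ./e2e.test \
    --kubeconfig=/root/.kube/config \
    --ginkgo.focus='should allow pods to hairpin back to themselves through services'
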